Dec 12 00:06:04 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 12 00:06:04 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 00:06:04 crc restorecon[4687]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc 
restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc 
restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 
00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc 
restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc 
restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04
crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 
00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 00:06:04 crc 
restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 
crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc 
restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc 
restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 00:06:05 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 00:06:05 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 12 00:06:05 crc kubenswrapper[4917]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 00:06:05 crc kubenswrapper[4917]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 12 00:06:05 crc kubenswrapper[4917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 00:06:05 crc kubenswrapper[4917]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 00:06:05 crc kubenswrapper[4917]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 12 00:06:05 crc kubenswrapper[4917]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.441775 4917 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444720 4917 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444740 4917 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444744 4917 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444748 4917 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444752 4917 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444758 4917 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444764 4917 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444771 4917 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444776 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444784 4917 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444789 4917 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444794 4917 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444799 4917 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444811 4917 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444816 4917 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444820 4917 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444824 4917 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444828 4917 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444832 4917 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444836 4917 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444839 4917 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 00:06:05 crc 
kubenswrapper[4917]: W1212 00:06:05.444843 4917 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444847 4917 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444851 4917 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444855 4917 feature_gate.go:330] unrecognized feature gate: Example Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444858 4917 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444862 4917 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444866 4917 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444870 4917 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444873 4917 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444877 4917 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444881 4917 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444885 4917 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444889 4917 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444892 4917 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444896 4917 feature_gate.go:330] unrecognized feature gate: 
NodeDisruptionPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444899 4917 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444903 4917 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444907 4917 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444910 4917 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444914 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444919 4917 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444924 4917 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444927 4917 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444931 4917 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444934 4917 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444938 4917 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444942 4917 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444945 4917 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444951 4917 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 
00:06:05.444956 4917 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444960 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444964 4917 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444969 4917 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444973 4917 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444977 4917 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444980 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444984 4917 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444988 4917 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444991 4917 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444994 4917 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.444998 4917 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.445001 4917 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.445004 4917 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 
00:06:05.445008 4917 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.445011 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.445014 4917 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.445018 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.445021 4917 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.445024 4917 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.445028 4917 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445214 4917 flags.go:64] FLAG: --address="0.0.0.0" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445226 4917 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445234 4917 flags.go:64] FLAG: --anonymous-auth="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445244 4917 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445249 4917 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445254 4917 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445260 4917 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445266 4917 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445270 4917 flags.go:64] FLAG: 
--authorization-webhook-cache-unauthorized-ttl="30s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445275 4917 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445279 4917 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445283 4917 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445287 4917 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445291 4917 flags.go:64] FLAG: --cgroup-root="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445296 4917 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445300 4917 flags.go:64] FLAG: --client-ca-file="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445304 4917 flags.go:64] FLAG: --cloud-config="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445308 4917 flags.go:64] FLAG: --cloud-provider="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445312 4917 flags.go:64] FLAG: --cluster-dns="[]" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445317 4917 flags.go:64] FLAG: --cluster-domain="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445321 4917 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445325 4917 flags.go:64] FLAG: --config-dir="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445329 4917 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445333 4917 flags.go:64] FLAG: --container-log-max-files="5" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445338 4917 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445342 4917 flags.go:64] 
FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445346 4917 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445351 4917 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445355 4917 flags.go:64] FLAG: --contention-profiling="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445359 4917 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445363 4917 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445368 4917 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445372 4917 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445377 4917 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445381 4917 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445386 4917 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445390 4917 flags.go:64] FLAG: --enable-load-reader="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445394 4917 flags.go:64] FLAG: --enable-server="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445398 4917 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445404 4917 flags.go:64] FLAG: --event-burst="100" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445408 4917 flags.go:64] FLAG: --event-qps="50" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445412 4917 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 
00:06:05.445416 4917 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445420 4917 flags.go:64] FLAG: --eviction-hard="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445425 4917 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445429 4917 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445433 4917 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445437 4917 flags.go:64] FLAG: --eviction-soft="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445441 4917 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445445 4917 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445451 4917 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445455 4917 flags.go:64] FLAG: --experimental-mounter-path="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445459 4917 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445463 4917 flags.go:64] FLAG: --fail-swap-on="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445467 4917 flags.go:64] FLAG: --feature-gates="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445472 4917 flags.go:64] FLAG: --file-check-frequency="20s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445477 4917 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445481 4917 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445487 4917 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 12 00:06:05 crc 
kubenswrapper[4917]: I1212 00:06:05.445493 4917 flags.go:64] FLAG: --healthz-port="10248" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445498 4917 flags.go:64] FLAG: --help="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445505 4917 flags.go:64] FLAG: --hostname-override="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445510 4917 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445515 4917 flags.go:64] FLAG: --http-check-frequency="20s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445520 4917 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445525 4917 flags.go:64] FLAG: --image-credential-provider-config="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445530 4917 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445535 4917 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445540 4917 flags.go:64] FLAG: --image-service-endpoint="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445545 4917 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445550 4917 flags.go:64] FLAG: --kube-api-burst="100" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445555 4917 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445560 4917 flags.go:64] FLAG: --kube-api-qps="50" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445564 4917 flags.go:64] FLAG: --kube-reserved="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445569 4917 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445573 4917 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 12 00:06:05 crc 
kubenswrapper[4917]: I1212 00:06:05.445579 4917 flags.go:64] FLAG: --kubelet-cgroups="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445584 4917 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445588 4917 flags.go:64] FLAG: --lock-file="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445592 4917 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445596 4917 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445602 4917 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445608 4917 flags.go:64] FLAG: --log-json-split-stream="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445612 4917 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445617 4917 flags.go:64] FLAG: --log-text-split-stream="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445622 4917 flags.go:64] FLAG: --logging-format="text" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445627 4917 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445631 4917 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445635 4917 flags.go:64] FLAG: --manifest-url="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445656 4917 flags.go:64] FLAG: --manifest-url-header="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445663 4917 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445667 4917 flags.go:64] FLAG: --max-open-files="1000000" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445673 4917 flags.go:64] FLAG: --max-pods="110" Dec 12 00:06:05 crc 
kubenswrapper[4917]: I1212 00:06:05.445677 4917 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445681 4917 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445686 4917 flags.go:64] FLAG: --memory-manager-policy="None" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445690 4917 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445694 4917 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445698 4917 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445704 4917 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445713 4917 flags.go:64] FLAG: --node-status-max-images="50" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445718 4917 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445722 4917 flags.go:64] FLAG: --oom-score-adj="-999" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445726 4917 flags.go:64] FLAG: --pod-cidr="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445730 4917 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445736 4917 flags.go:64] FLAG: --pod-manifest-path="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445739 4917 flags.go:64] FLAG: --pod-max-pids="-1" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445744 4917 flags.go:64] FLAG: --pods-per-core="0" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445748 4917 
flags.go:64] FLAG: --port="10250" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445752 4917 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445757 4917 flags.go:64] FLAG: --provider-id="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445761 4917 flags.go:64] FLAG: --qos-reserved="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445766 4917 flags.go:64] FLAG: --read-only-port="10255" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445771 4917 flags.go:64] FLAG: --register-node="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445775 4917 flags.go:64] FLAG: --register-schedulable="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445780 4917 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445787 4917 flags.go:64] FLAG: --registry-burst="10" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445792 4917 flags.go:64] FLAG: --registry-qps="5" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445797 4917 flags.go:64] FLAG: --reserved-cpus="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445801 4917 flags.go:64] FLAG: --reserved-memory="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445806 4917 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445811 4917 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445816 4917 flags.go:64] FLAG: --rotate-certificates="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445820 4917 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445824 4917 flags.go:64] FLAG: --runonce="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445828 4917 flags.go:64] FLAG: 
--runtime-cgroups="/system.slice/crio.service" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445832 4917 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445837 4917 flags.go:64] FLAG: --seccomp-default="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445840 4917 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445845 4917 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445849 4917 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445853 4917 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445858 4917 flags.go:64] FLAG: --storage-driver-password="root" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445862 4917 flags.go:64] FLAG: --storage-driver-secure="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445866 4917 flags.go:64] FLAG: --storage-driver-table="stats" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445869 4917 flags.go:64] FLAG: --storage-driver-user="root" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445873 4917 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445878 4917 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445882 4917 flags.go:64] FLAG: --system-cgroups="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445886 4917 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445892 4917 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445896 4917 flags.go:64] FLAG: --tls-cert-file="" Dec 12 00:06:05 crc 
kubenswrapper[4917]: I1212 00:06:05.445900 4917 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445906 4917 flags.go:64] FLAG: --tls-min-version="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445914 4917 flags.go:64] FLAG: --tls-private-key-file="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445918 4917 flags.go:64] FLAG: --topology-manager-policy="none" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445922 4917 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445926 4917 flags.go:64] FLAG: --topology-manager-scope="container" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445930 4917 flags.go:64] FLAG: --v="2" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445936 4917 flags.go:64] FLAG: --version="false" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445941 4917 flags.go:64] FLAG: --vmodule="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445946 4917 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.445950 4917 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446052 4917 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446057 4917 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446061 4917 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446065 4917 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446069 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 
00:06:05.446074 4917 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446078 4917 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446081 4917 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446085 4917 feature_gate.go:330] unrecognized feature gate: Example Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446088 4917 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446093 4917 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446097 4917 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446100 4917 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446104 4917 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446107 4917 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446111 4917 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446114 4917 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446118 4917 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446121 4917 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446125 4917 feature_gate.go:330] unrecognized feature gate: 
BareMetalLoadBalancer Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446128 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446132 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446136 4917 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446141 4917 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446144 4917 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446148 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446153 4917 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446158 4917 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446162 4917 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446165 4917 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446169 4917 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446172 4917 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446176 4917 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446179 4917 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446183 4917 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446186 4917 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446190 4917 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446194 4917 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446197 4917 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446201 4917 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446204 4917 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446208 4917 
feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446213 4917 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446216 4917 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446220 4917 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446223 4917 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446228 4917 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446232 4917 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446236 4917 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446239 4917 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446243 4917 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446246 4917 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446250 4917 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446253 4917 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446258 4917 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446264 4917 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446268 4917 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446272 4917 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446276 4917 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446280 4917 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446284 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446288 4917 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446291 4917 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446296 4917 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446300 4917 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446304 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446308 4917 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446311 4917 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446315 4917 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446318 4917 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.446322 4917 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.446335 4917 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.459110 4917 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.459176 4917 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459296 4917 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459309 4917 
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459316 4917 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459322 4917 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459329 4917 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459336 4917 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459347 4917 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459353 4917 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459358 4917 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459363 4917 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459368 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459374 4917 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459379 4917 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459383 4917 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459388 4917 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459393 4917 feature_gate.go:330] unrecognized feature gate: 
NutanixMultiSubnets Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459398 4917 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459403 4917 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459411 4917 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459415 4917 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459420 4917 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459428 4917 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459433 4917 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459437 4917 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459442 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459448 4917 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459453 4917 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459459 4917 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459463 4917 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459468 4917 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 
00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459472 4917 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459477 4917 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459482 4917 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459488 4917 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459493 4917 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459497 4917 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459502 4917 feature_gate.go:330] unrecognized feature gate: Example Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459506 4917 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459510 4917 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459515 4917 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459520 4917 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459524 4917 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459528 4917 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459533 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459539 4917 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459546 4917 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459550 4917 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459558 4917 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459564 4917 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459570 4917 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459575 4917 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459581 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459586 4917 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459591 4917 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459595 4917 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459599 4917 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459603 4917 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459607 4917 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 
00:06:05.459611 4917 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459616 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459620 4917 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459624 4917 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459628 4917 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459632 4917 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459638 4917 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459668 4917 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459673 4917 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459678 4917 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459683 4917 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459688 4917 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459693 4917 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.459703 4917 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false 
EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459905 4917 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459916 4917 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459921 4917 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459926 4917 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459931 4917 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459935 4917 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459940 4917 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459945 4917 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459950 4917 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459955 4917 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459959 4917 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459964 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 
12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459970 4917 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459976 4917 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459982 4917 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459986 4917 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459991 4917 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.459996 4917 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460000 4917 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460004 4917 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460009 4917 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460013 4917 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460017 4917 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460022 4917 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460026 4917 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460031 4917 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 00:06:05 crc 
kubenswrapper[4917]: W1212 00:06:05.460037 4917 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460042 4917 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460046 4917 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460051 4917 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460058 4917 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460063 4917 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460067 4917 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460071 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460076 4917 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460080 4917 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460084 4917 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460088 4917 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460092 4917 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460096 4917 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 
00:06:05.460101 4917 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460106 4917 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460111 4917 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460115 4917 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460120 4917 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460124 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460129 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460133 4917 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460137 4917 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460142 4917 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460147 4917 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460152 4917 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460156 4917 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460160 4917 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460165 4917 feature_gate.go:330] unrecognized feature gate: Example Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460169 4917 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460174 4917 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460182 4917 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460187 4917 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460191 4917 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460196 4917 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460200 4917 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460206 4917 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460210 4917 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460215 4917 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460219 4917 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460224 4917 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460229 4917 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460234 4917 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460239 4917 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.460244 4917 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.460252 4917 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.460522 4917 server.go:940] "Client rotation is on, will bootstrap in background" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.463749 4917 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.463869 4917 
certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.464533 4917 server.go:997] "Starting client certificate rotation" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.464560 4917 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.464785 4917 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-29 12:51:10.710793703 +0000 UTC Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.464910 4917 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 420h45m5.245886464s for next certificate rotation Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.473484 4917 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.475350 4917 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.482779 4917 log.go:25] "Validated CRI v1 runtime API" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.505634 4917 log.go:25] "Validated CRI v1 image API" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.507910 4917 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.511402 4917 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-12-00-01-25-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.511449 4917 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm 
major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.525904 4917 manager.go:217] Machine: {Timestamp:2025-12-12 00:06:05.52384635 +0000 UTC m=+0.301647183 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3860a222-2102-46c2-9063-9861157893b4 BootID:153f6872-46ff-42ea-b410-996e545902e8 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9b:9e:ae Speed:0 Mtu:1500} {Name:br-int 
MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9b:9e:ae Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d1:8a:57 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c5:ee:da Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:28:35:11 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:12:4e:3d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:d3:f1:46:ed:2d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:60:3d:0c:ac:71 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 
Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.526172 4917 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.526455 4917 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.527140 4917 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.527329 4917 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.527365 4917 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.527606 4917 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.527617 4917 container_manager_linux.go:303] "Creating device plugin manager"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.527960 4917 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.528002 4917 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.528324 4917 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.528438 4917 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.529141 4917 kubelet.go:418] "Attempting to sync node with API server"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.529164 4917 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.529186 4917 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.529201 4917 kubelet.go:324] "Adding apiserver pod source"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.530214 4917 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.535575 4917 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused
Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.535709 4917 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError"
Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.536033 4917 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused
Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.536933 4917 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.536963 4917 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.537631 4917 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.539930 4917 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540413 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540436 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540443 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540450 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540461 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540468 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540475 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540488 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540497 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540506 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540516 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.540523 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.542120 4917 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.542736 4917 server.go:1280] "Started kubelet"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.543305 4917 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.543603 4917 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.543912 4917 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.544206 4917 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 00:06:05 crc systemd[1]: Started Kubernetes Kubelet.
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.545601 4917 server.go:460] "Adding debug handlers to kubelet server"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.545702 4917 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.545795 4917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.545936 4917 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.545959 4917 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.546125 4917 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.545888 4917 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:07:05.87276906 +0000 UTC
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.546705 4917 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 861h1m0.32607234s for next certificate rotation
Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.546958 4917 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.546567 4917 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.15:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18804f0bafd8da44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 00:06:05.542693444 +0000 UTC m=+0.320494257,LastTimestamp:2025-12-12 00:06:05.542693444 +0000 UTC m=+0.320494257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.547240 4917 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused
Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.547468 4917 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.547675 4917 factory.go:55] Registering systemd factory
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.547704 4917 factory.go:221] Registration of the systemd container factory successfully
Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.547676 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="200ms"
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.548142 4917 factory.go:153] Registering CRI-O factory
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.548159 4917 factory.go:221] Registration of the crio container factory successfully
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.548214 4917 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.548237 4917 factory.go:103] Registering Raw factory
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.548252 4917 manager.go:1196] Started watching for new ooms in manager
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.548871 4917 manager.go:319] Starting recovery of all containers
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558017 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558459 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558475 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558488 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558501 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558513 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558522 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558532 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558546 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558555 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558567 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558581 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558593 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558608 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558619 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558629 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558665 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558687 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558705 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558730 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558748 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558763 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558777 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558793 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558805 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558816 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558830 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558869 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558885 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558899 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558912 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558926 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558959 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558969 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558980 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.558991 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559007 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559020 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559033 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559046 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559057 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559069 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559081 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559094 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559107 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559119 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559155 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559167 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559184 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559197 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559210 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559225 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559271 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559286 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559299 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559315 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559328 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559342 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559355 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559367 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559381 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559394 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559408 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559421 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559433 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559444 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559454 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559466 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559512 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559529 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559543 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559558 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559572 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559585 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559597 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559607 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559618 4917
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559629 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559656 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559673 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559689 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559707 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559723 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559736 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559747 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559763 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559781 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559795 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559811 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559828 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559842 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559857 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559871 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559885 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559899 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559935 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559947 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559958 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559971 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559982 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.559992 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.560004 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.560016 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.560028 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.560045 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.560058 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.560071 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.560082 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562039 4917 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562074 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562092 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562105 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562117 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562131 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562146 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562159 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562170 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562180 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562192 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562205 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562216 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562227 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562240 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562252 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562265 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.562279 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563332 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563411 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563434 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563452 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563491 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" 
seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563511 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563526 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563540 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563556 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563572 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563595 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 
00:06:05.563613 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563656 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563671 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563691 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563708 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563724 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563742 4917 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563759 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563774 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563792 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563809 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563829 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563845 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563862 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563880 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563897 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563913 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563932 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563948 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563966 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.563982 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564000 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564017 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564036 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564053 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564070 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564088 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564104 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564134 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564152 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564168 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564187 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564202 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564219 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564237 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564257 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564274 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564295 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564311 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564328 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564342 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564360 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564376 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" 
seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564391 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564405 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564421 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564437 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564453 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564473 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 
00:06:05.564488 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564506 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564527 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564544 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564561 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564578 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564595 4917 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564611 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564629 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564668 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564686 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564703 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564720 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564738 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564757 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564777 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564796 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564817 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564836 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564860 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564881 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564897 4917 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564913 4917 reconstruct.go:97] "Volume reconstruction finished" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.564924 4917 reconciler.go:26] "Reconciler: start to sync state" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.568218 4917 manager.go:324] Recovery completed Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.577997 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.579980 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.580032 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc 
kubenswrapper[4917]: I1212 00:06:05.580043 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.582144 4917 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.582158 4917 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.582198 4917 state_mem.go:36] "Initialized new in-memory state store" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.595753 4917 policy_none.go:49] "None policy: Start" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.597954 4917 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.597988 4917 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.598100 4917 state_mem.go:35] "Initializing new in-memory state store" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.600577 4917 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.600627 4917 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.600674 4917 kubelet.go:2335] "Starting kubelet main sync loop" Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.600744 4917 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 00:06:05 crc kubenswrapper[4917]: W1212 00:06:05.601349 4917 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.601420 4917 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.647337 4917 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.671895 4917 manager.go:334] "Starting Device Plugin manager" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.671959 4917 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.671986 4917 server.go:79] "Starting device plugin registration server" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.672494 4917 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 00:06:05 crc 
kubenswrapper[4917]: I1212 00:06:05.672519 4917 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.672816 4917 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.672908 4917 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.672922 4917 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.683409 4917 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.701735 4917 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.701885 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.703383 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.703429 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.703445 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.703591 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.703895 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.704004 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.704392 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.704526 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.704612 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.704902 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.705119 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.705236 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.705479 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.705511 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.705523 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.706262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.706293 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.706305 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.706434 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.706685 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.706754 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.706828 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.706857 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.706868 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.707303 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.707346 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.707363 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.707713 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.707753 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.707782 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.708206 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.708234 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.708243 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.708782 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.708838 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.708782 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.708884 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.708906 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.708856 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.709177 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.709227 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.710011 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.710032 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.710041 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.748294 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="400ms" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.768410 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.768481 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.768521 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.768553 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.768584 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.768634 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.768691 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.769044 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.769116 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.769151 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.769177 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.769251 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.769295 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.769321 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.769342 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.772903 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.774621 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.774707 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.774726 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.774761 4917 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.775474 4917 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.15:6443: connect: connection refused" node="crc" Dec 12 00:06:05 crc 
kubenswrapper[4917]: I1212 00:06:05.870449 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870512 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870535 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870557 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870572 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870591 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870608 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870626 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870666 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870682 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870698 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870712 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870760 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870778 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870777 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870796 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870798 4917 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870862 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870899 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870931 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870981 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870955 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870981 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.870951 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.871044 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.871096 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.871061 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.871095 4917 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.871117 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.871057 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.904065 4917 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.15:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18804f0bafd8da44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 00:06:05.542693444 +0000 UTC m=+0.320494257,LastTimestamp:2025-12-12 00:06:05.542693444 +0000 UTC m=+0.320494257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.976263 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:05 crc 
kubenswrapper[4917]: I1212 00:06:05.977815 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.977877 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.977893 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:05 crc kubenswrapper[4917]: I1212 00:06:05.977931 4917 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 00:06:05 crc kubenswrapper[4917]: E1212 00:06:05.978534 4917 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.15:6443: connect: connection refused" node="crc" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.043059 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.051439 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 00:06:06 crc kubenswrapper[4917]: W1212 00:06:06.077445 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a1af40712134f42ecba8a2f30360a185b3d6bcc5a3151ac84748364bac80f1b5 WatchSource:0}: Error finding container a1af40712134f42ecba8a2f30360a185b3d6bcc5a3151ac84748364bac80f1b5: Status 404 returned error can't find the container with id a1af40712134f42ecba8a2f30360a185b3d6bcc5a3151ac84748364bac80f1b5 Dec 12 00:06:06 crc kubenswrapper[4917]: W1212 00:06:06.080892 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-94419888517a00d0096f6d8489340b24e28ca8c0896e13982d23a2d18f2c085c WatchSource:0}: Error finding container 94419888517a00d0096f6d8489340b24e28ca8c0896e13982d23a2d18f2c085c: Status 404 returned error can't find the container with id 94419888517a00d0096f6d8489340b24e28ca8c0896e13982d23a2d18f2c085c Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.081304 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.104804 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.113461 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 00:06:06 crc kubenswrapper[4917]: W1212 00:06:06.117544 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ba48cf804afcbd48361625cc51aaa9d1b53b4a0f0175e5922b40688d306c3608 WatchSource:0}: Error finding container ba48cf804afcbd48361625cc51aaa9d1b53b4a0f0175e5922b40688d306c3608: Status 404 returned error can't find the container with id ba48cf804afcbd48361625cc51aaa9d1b53b4a0f0175e5922b40688d306c3608 Dec 12 00:06:06 crc kubenswrapper[4917]: W1212 00:06:06.130805 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-685014089cc7870454457b8c9287f65ccc245e332350a7eb63000c7d4507485f WatchSource:0}: Error finding container 685014089cc7870454457b8c9287f65ccc245e332350a7eb63000c7d4507485f: Status 404 returned error can't find the container with id 685014089cc7870454457b8c9287f65ccc245e332350a7eb63000c7d4507485f Dec 12 00:06:06 crc kubenswrapper[4917]: E1212 00:06:06.149983 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="800ms" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.379400 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.382074 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.382140 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 
12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.382156 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.382197 4917 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 00:06:06 crc kubenswrapper[4917]: E1212 00:06:06.382916 4917 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.15:6443: connect: connection refused" node="crc" Dec 12 00:06:06 crc kubenswrapper[4917]: W1212 00:06:06.434470 4917 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Dec 12 00:06:06 crc kubenswrapper[4917]: E1212 00:06:06.434592 4917 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Dec 12 00:06:06 crc kubenswrapper[4917]: W1212 00:06:06.474801 4917 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Dec 12 00:06:06 crc kubenswrapper[4917]: E1212 00:06:06.474901 4917 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: 
connect: connection refused" logger="UnhandledError" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.545187 4917 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.607540 4917 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667" exitCode=0 Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.607636 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667"} Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.607810 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"20346c517ce2d4815df120930ad3d722abf3738e2d52658f871f71890b72763a"} Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.607957 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.608956 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.608987 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.608999 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.610033 4917 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3c0bfd3a18c72a0be214ef2f11fe764bd88d87e6f65d6a021c6865727e614d12" exitCode=0 Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.610104 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3c0bfd3a18c72a0be214ef2f11fe764bd88d87e6f65d6a021c6865727e614d12"} Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.610135 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a1af40712134f42ecba8a2f30360a185b3d6bcc5a3151ac84748364bac80f1b5"} Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.610366 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.610787 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.612027 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.612068 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.612083 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.612349 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.612399 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:06 crc 
kubenswrapper[4917]: I1212 00:06:06.612437 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.612459 4917 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3f1b39a06416858eab72e0bd88b65da74a16c8a46029841452ee888b039a127d" exitCode=0 Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.612530 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3f1b39a06416858eab72e0bd88b65da74a16c8a46029841452ee888b039a127d"} Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.612598 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"94419888517a00d0096f6d8489340b24e28ca8c0896e13982d23a2d18f2c085c"} Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.612748 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.614048 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.614088 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.614101 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.615369 4917 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954" exitCode=0 Dec 
12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.615419 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954"} Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.615482 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"685014089cc7870454457b8c9287f65ccc245e332350a7eb63000c7d4507485f"} Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.615616 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.616783 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.616825 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.616853 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.620801 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235"} Dec 12 00:06:06 crc kubenswrapper[4917]: I1212 00:06:06.620858 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba48cf804afcbd48361625cc51aaa9d1b53b4a0f0175e5922b40688d306c3608"} Dec 
12 00:06:06 crc kubenswrapper[4917]: W1212 00:06:06.628850 4917 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Dec 12 00:06:06 crc kubenswrapper[4917]: E1212 00:06:06.628956 4917 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Dec 12 00:06:06 crc kubenswrapper[4917]: W1212 00:06:06.650461 4917 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Dec 12 00:06:06 crc kubenswrapper[4917]: E1212 00:06:06.650590 4917 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.15:6443: connect: connection refused" logger="UnhandledError" Dec 12 00:06:06 crc kubenswrapper[4917]: E1212 00:06:06.953301 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="1.6s" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.184035 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.188495 4917 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.188579 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.188594 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.188627 4917 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 00:06:07 crc kubenswrapper[4917]: E1212 00:06:07.189366 4917 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.15:6443: connect: connection refused" node="crc" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.545457 4917 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.15:6443: connect: connection refused Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.636102 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"17ac02655901d9d729b6cf2cccf17ed4104f1d1f568d813c68920686068db586"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.636199 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bf369a1cb2eaf2ca16299cd6a8e314ae2693ede79e120eaae657dcfc7c1629c8"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.636212 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"208db3014d28fbd4ad947b0bb07b8e2cdd07a9a42923f12a89dafbb482228861"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.636343 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.638077 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.638113 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.638126 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.642147 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.642188 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.642202 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.642728 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 
12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.644488 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.644558 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.644569 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.645959 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.646034 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.646051 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.646067 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.653579 4917 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="7ea6d1f2a58a7452e54ed0ef0b4409a20d4b3fe44fe176b0a8d52602a2f972be" exitCode=0 Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.653679 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7ea6d1f2a58a7452e54ed0ef0b4409a20d4b3fe44fe176b0a8d52602a2f972be"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.653885 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.655006 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.655049 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.655064 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.657121 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"048e4e3321ff297e044bfa5ac97a862037024980439ac4c475215dc578c4b542"} Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.657242 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.658928 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.658998 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:07 crc kubenswrapper[4917]: I1212 00:06:07.659010 4917 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.662854 4917 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="debe388b959f90a964de31373a32ce4a671b7d8e7ebc061cf9de7723e1856ebd" exitCode=0 Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.662925 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"debe388b959f90a964de31373a32ce4a671b7d8e7ebc061cf9de7723e1856ebd"} Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.663117 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.664316 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.664347 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.664357 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.666796 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.667277 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.667291 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308"} Dec 12 00:06:08 crc 
kubenswrapper[4917]: I1212 00:06:08.668122 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.668171 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.668182 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.668135 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.668400 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.668434 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.789851 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.791315 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.791361 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.791377 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:08 crc kubenswrapper[4917]: I1212 00:06:08.791411 4917 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.678089 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a56ed1ba84a34f4c73c146f16b267782610e835847596d4a4795cf0ed3144001"} Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.678150 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6ee374fe9b945813c4b4c4d0c4520666a3143d70cb53f579b58ae306ea56587f"} Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.678167 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"02716ba394064a78d2f67241000e581a3b57a77d349ebe84c178d56763575595"} Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.678180 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8447e16d8e6aa7cd7f49dcdb9139cf311ade5aac1561dd894f1040cc0d352ca4"} Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.678191 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c58f5a302aac53beaefd1b6e85f1ab6bd511fa08841118a9b55149065aee47aa"} Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.678204 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.678291 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.678367 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.679195 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.679236 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.679251 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.679318 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.679354 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.679367 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.722031 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.722207 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.723417 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.723448 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.723458 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.733017 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:09 crc 
kubenswrapper[4917]: I1212 00:06:09.733247 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.734580 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.734666 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:09 crc kubenswrapper[4917]: I1212 00:06:09.734685 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:10 crc kubenswrapper[4917]: I1212 00:06:10.584020 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 12 00:06:10 crc kubenswrapper[4917]: I1212 00:06:10.637725 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:10 crc kubenswrapper[4917]: I1212 00:06:10.680103 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:10 crc kubenswrapper[4917]: I1212 00:06:10.680159 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:10 crc kubenswrapper[4917]: I1212 00:06:10.681332 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:10 crc kubenswrapper[4917]: I1212 00:06:10.681391 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:10 crc kubenswrapper[4917]: I1212 00:06:10.681407 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:10 crc kubenswrapper[4917]: I1212 00:06:10.681415 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 00:06:10 crc kubenswrapper[4917]: I1212 00:06:10.681475 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:10 crc kubenswrapper[4917]: I1212 00:06:10.681494 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:11 crc kubenswrapper[4917]: I1212 00:06:11.682711 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:11 crc kubenswrapper[4917]: I1212 00:06:11.684131 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:11 crc kubenswrapper[4917]: I1212 00:06:11.684175 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:11 crc kubenswrapper[4917]: I1212 00:06:11.684184 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:12 crc kubenswrapper[4917]: I1212 00:06:12.882293 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:12 crc kubenswrapper[4917]: I1212 00:06:12.882514 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:12 crc kubenswrapper[4917]: I1212 00:06:12.884130 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:12 crc kubenswrapper[4917]: I1212 00:06:12.884169 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:12 crc kubenswrapper[4917]: I1212 00:06:12.884184 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:12 crc kubenswrapper[4917]: I1212 00:06:12.976987 4917 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 12 00:06:12 crc kubenswrapper[4917]: I1212 00:06:12.977227 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:12 crc kubenswrapper[4917]: I1212 00:06:12.978812 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:12 crc kubenswrapper[4917]: I1212 00:06:12.978854 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:12 crc kubenswrapper[4917]: I1212 00:06:12.978870 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.513073 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.513297 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.515190 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.515262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.515281 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.985256 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.985470 4917 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.987089 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.987141 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.987160 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:13 crc kubenswrapper[4917]: I1212 00:06:13.992225 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:14 crc kubenswrapper[4917]: I1212 00:06:14.692522 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:14 crc kubenswrapper[4917]: I1212 00:06:14.694229 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:14 crc kubenswrapper[4917]: I1212 00:06:14.694271 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:14 crc kubenswrapper[4917]: I1212 00:06:14.694284 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:15 crc kubenswrapper[4917]: E1212 00:06:15.684409 4917 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 00:06:16 crc kubenswrapper[4917]: I1212 00:06:16.697347 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:16 crc kubenswrapper[4917]: I1212 00:06:16.697705 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 12 00:06:16 crc kubenswrapper[4917]: I1212 00:06:16.699424 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:16 crc kubenswrapper[4917]: I1212 00:06:16.699476 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:16 crc kubenswrapper[4917]: I1212 00:06:16.699490 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:16 crc kubenswrapper[4917]: I1212 00:06:16.707665 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:17 crc kubenswrapper[4917]: I1212 00:06:17.702346 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:17 crc kubenswrapper[4917]: I1212 00:06:17.703318 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:17 crc kubenswrapper[4917]: I1212 00:06:17.703349 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:17 crc kubenswrapper[4917]: I1212 00:06:17.703359 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:17 crc kubenswrapper[4917]: I1212 00:06:17.831732 4917 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 00:06:17 crc kubenswrapper[4917]: I1212 00:06:17.831820 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 00:06:17 crc kubenswrapper[4917]: I1212 00:06:17.956586 4917 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 00:06:17 crc kubenswrapper[4917]: I1212 00:06:17.956705 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 00:06:17 crc kubenswrapper[4917]: I1212 00:06:17.964789 4917 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 00:06:17 crc kubenswrapper[4917]: I1212 00:06:17.964877 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 00:06:19 crc kubenswrapper[4917]: I1212 00:06:19.697912 4917 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 00:06:19 crc kubenswrapper[4917]: I1212 00:06:19.698009 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.612972 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.613186 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.614340 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.614382 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.614396 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.627101 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.674079 4917 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 
00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.674177 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.710415 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.711442 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.711483 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:20 crc kubenswrapper[4917]: I1212 00:06:20.711496 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.889727 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.889979 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.890511 4917 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.890601 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.891899 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.891933 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.891949 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.894688 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:22 crc kubenswrapper[4917]: E1212 00:06:22.963148 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.965362 4917 trace.go:236] Trace[2087763386]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 00:06:09.023) (total time: 13941ms): Dec 12 00:06:22 crc kubenswrapper[4917]: Trace[2087763386]: ---"Objects listed" error: 13941ms (00:06:22.965) Dec 12 00:06:22 crc kubenswrapper[4917]: Trace[2087763386]: [13.941394357s] [13.941394357s] END Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.965396 4917 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.966103 4917 trace.go:236] Trace[1350582855]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 00:06:09.241) (total time: 
13724ms): Dec 12 00:06:22 crc kubenswrapper[4917]: Trace[1350582855]: ---"Objects listed" error: 13724ms (00:06:22.966) Dec 12 00:06:22 crc kubenswrapper[4917]: Trace[1350582855]: [13.724270213s] [13.724270213s] END Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.966131 4917 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.966898 4917 trace.go:236] Trace[516486259]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 00:06:08.586) (total time: 14379ms): Dec 12 00:06:22 crc kubenswrapper[4917]: Trace[516486259]: ---"Objects listed" error: 14379ms (00:06:22.966) Dec 12 00:06:22 crc kubenswrapper[4917]: Trace[516486259]: [14.379925582s] [14.379925582s] END Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.966930 4917 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.967902 4917 trace.go:236] Trace[324614128]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 00:06:08.838) (total time: 14129ms): Dec 12 00:06:22 crc kubenswrapper[4917]: Trace[324614128]: ---"Objects listed" error: 14129ms (00:06:22.967) Dec 12 00:06:22 crc kubenswrapper[4917]: Trace[324614128]: [14.12926368s] [14.12926368s] END Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.967927 4917 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 12 00:06:22 crc kubenswrapper[4917]: I1212 00:06:22.968423 4917 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 12 00:06:22 crc kubenswrapper[4917]: E1212 00:06:22.968957 4917 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 12 00:06:23 crc 
kubenswrapper[4917]: I1212 00:06:23.544154 4917 apiserver.go:52] "Watching apiserver" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.546542 4917 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.546867 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.547244 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.547338 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.547392 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.547450 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.547452 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.548015 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.547569 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.548101 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.548218 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.551242 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.551408 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.551568 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.551632 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.551664 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.551590 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.551818 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.551911 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.553845 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.578014 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.592383 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.613100 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.626692 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.638614 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.646758 4917 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.654943 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.666738 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.673593 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.673749 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.673850 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.673967 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674046 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674123 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674195 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674266 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674342 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674409 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674481 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674552 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674618 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 12 
00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674715 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674795 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674868 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674935 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675022 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675099 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675165 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675236 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675300 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675371 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675446 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 
00:06:23.675516 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675590 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675696 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675802 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675878 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675998 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676081 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676161 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676232 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676312 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676391 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676459 4917 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676526 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674141 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674168 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674361 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674496 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674733 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676686 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.674974 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675238 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675368 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675425 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675478 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675493 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675503 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675516 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675669 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675728 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675768 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675922 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676797 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676842 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.675995 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676022 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676029 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676055 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676078 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676160 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676203 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676222 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676242 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676332 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676347 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676465 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676458 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676665 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677093 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.676593 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677246 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677249 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677279 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677309 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677336 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677362 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677386 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677407 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677436 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677440 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677471 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677493 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677518 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677546 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677572 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677583 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677601 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677622 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677645 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677656 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677770 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.677684 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678049 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678054 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678073 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678109 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678128 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678222 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678241 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678269 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678288 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678306 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678324 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 
00:06:23.678362 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678394 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678421 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678445 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678471 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678492 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" 
(UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678514 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678532 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678550 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678566 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678583 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678598 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678615 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678632 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678658 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678693 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678710 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 12 
00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678742 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678773 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678796 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678816 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678834 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678854 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678875 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678894 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678911 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678928 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678946 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 12 00:06:23 crc 
kubenswrapper[4917]: I1212 00:06:23.678963 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678979 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678995 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679011 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679030 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679049 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679066 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679084 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679100 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679116 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679139 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679157 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679175 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678320 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678335 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678571 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678593 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678608 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678844 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.678942 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679044 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679068 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679085 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.679203 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:06:24.17917677 +0000 UTC m=+18.956977583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679522 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679558 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679588 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679602 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679611 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679720 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679749 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679773 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679796 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679818 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679827 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679954 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680131 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680190 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680323 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679296 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680365 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680417 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680463 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680582 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680619 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680657 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680713 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680824 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.680963 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.681145 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.681990 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.681986 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.682134 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683168 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683192 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683225 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683384 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.679837 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683578 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683661 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683625 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683735 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683781 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683887 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683913 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683934 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.683951 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684194 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684296 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684507 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684710 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684716 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684841 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684869 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684942 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684970 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684988 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685036 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685059 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685082 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685119 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685141 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685163 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685211 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685230 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685250 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685272 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685311 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685334 4917 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685373 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685394 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685416 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685433 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685478 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685500 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685520 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685567 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685589 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685677 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685699 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685718 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685755 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685776 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685798 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685842 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 
00:06:23.685861 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685880 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684942 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685923 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.684946 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685947 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685968 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686015 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686038 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686058 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686094 4917 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686115 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686136 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686197 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686215 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686232 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686250 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686433 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686463 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686483 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686580 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 
12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686600 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686620 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686994 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687015 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687065 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687084 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687100 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687121 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687613 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687636 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687676 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " 
Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687694 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687712 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687728 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687894 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687913 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687930 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687973 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687991 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688011 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688030 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688046 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 12 00:06:23 crc 
kubenswrapper[4917]: I1212 00:06:23.688063 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688083 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688205 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688235 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688311 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 
00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688380 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688407 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688461 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688493 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688537 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688559 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688581 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688623 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689065 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689099 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689153 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689298 4917 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689316 4917 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689358 4917 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689370 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689382 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689393 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689403 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689417 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689461 4917 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689471 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689484 4917 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689498 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689550 4917 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689565 4917 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689574 4917 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689587 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689597 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689735 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689751 4917 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" 
DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689762 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689771 4917 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689780 4917 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689790 4917 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689919 4917 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689931 4917 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689942 4917 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689951 4917 reconciler_common.go:293] "Volume 
detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689964 4917 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690080 4917 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690096 4917 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690106 4917 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690116 4917 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690127 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690205 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node 
\"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690220 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690229 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690239 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690251 4917 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685075 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690340 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685255 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685322 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685389 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685503 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685845 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.685980 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686523 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686793 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686891 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.686934 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687205 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687236 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687279 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687289 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687295 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687368 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687548 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687577 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.687716 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688189 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688236 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688396 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688423 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.688907 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689059 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689145 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689211 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689338 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689424 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689736 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690447 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690550 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690689 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690927 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690950 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689796 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689801 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689999 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690042 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691133 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691235 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691303 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.690264 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691490 4917 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691579 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691590 4917 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 
12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691601 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691611 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691621 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691692 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691706 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691718 4917 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691728 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691738 4917 reconciler_common.go:293] "Volume 
detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691748 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691761 4917 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691774 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691788 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691874 4917 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691889 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691942 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692951 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692971 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692985 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692999 4917 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693013 4917 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693028 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693045 4917 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693063 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693081 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693105 4917 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693125 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693143 4917 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693163 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.689791 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" 
(OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691621 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691704 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691698 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.691856 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692010 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692257 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692275 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692314 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692379 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692399 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692476 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692481 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692617 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.692827 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693617 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.693697 4917 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.693761 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.693784 4917 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:24.193756738 +0000 UTC m=+18.971557551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.693896 4917 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694018 4917 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694067 4917 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694429 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.694103 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:24.194074885 +0000 UTC m=+18.971875938 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694387 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694475 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694564 4917 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694763 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694788 4917 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694806 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694798 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694833 4917 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694849 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694866 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694879 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694892 4917 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694905 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694918 4917 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: 
I1212 00:06:23.694932 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694947 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694953 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694962 4917 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.694994 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.695409 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.695479 4917 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.695871 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.696116 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.697029 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.698995 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.699015 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.699027 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.699135 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.699381 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.699420 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.699514 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.699594 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.700706 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.706899 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.706939 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.707198 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.707832 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.707944 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.708265 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.708307 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.708327 4917 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.708408 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:24.208381076 +0000 UTC m=+18.986182089 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.708737 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.709123 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.709250 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.709274 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.709276 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.709343 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.709474 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.711467 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.711991 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.712633 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.713815 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.713851 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.714784 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.715027 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.715958 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.716089 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.716096 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.716718 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.716757 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.716777 4917 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:23 crc kubenswrapper[4917]: E1212 00:06:23.716840 4917 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:24.216818519 +0000 UTC m=+18.994619332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.718168 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.718161 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.719714 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.720235 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.722994 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.724782 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.724924 4917 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308" exitCode=255 Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.724986 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308"} Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.725019 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.725437 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.725456 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.725491 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.728814 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.739752 4917 scope.go:117] "RemoveContainer" containerID="cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.740528 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.741562 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.746825 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.747841 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.751142 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.755579 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.756571 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.766283 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.776760 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.787442 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796399 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796463 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796523 4917 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796535 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796545 4917 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796554 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796565 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796574 4917 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796598 4917 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796607 4917 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796588 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796615 4917 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796764 4917 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796784 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796799 4917 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796814 4917 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796821 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796830 4917 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796849 4917 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796865 4917 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796880 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796893 4917 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc 
kubenswrapper[4917]: I1212 00:06:23.796905 4917 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796917 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796931 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796943 4917 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796955 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796968 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796979 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.796991 4917 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797004 4917 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797017 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797029 4917 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797040 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797052 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797064 4917 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797075 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on 
node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797086 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797097 4917 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797109 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797120 4917 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797132 4917 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797144 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797156 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797167 4917 reconciler_common.go:293] 
"Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797178 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797193 4917 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797207 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797218 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797230 4917 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797242 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797254 4917 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797265 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797278 4917 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797291 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797305 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797318 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797330 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797343 4917 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797354 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797366 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797378 4917 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797391 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797404 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797417 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797430 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797444 4917 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797455 4917 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797466 4917 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797477 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797489 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797500 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797512 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797523 4917 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797534 4917 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797545 4917 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797556 4917 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797566 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797577 4917 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797590 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" 
Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797603 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797615 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797626 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797637 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797653 4917 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797668 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797716 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 
00:06:23.797729 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797740 4917 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797749 4917 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797759 4917 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797769 4917 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797778 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797788 4917 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797801 4917 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797812 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797824 4917 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797836 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797848 4917 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797860 4917 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797872 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797883 4917 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 
00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797895 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797906 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797919 4917 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797930 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797941 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797952 4917 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797962 4917 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797973 4917 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.797987 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.798093 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.860694 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.868731 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 00:06:23 crc kubenswrapper[4917]: I1212 00:06:23.874685 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 00:06:23 crc kubenswrapper[4917]: W1212 00:06:23.881182 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-1c14618e8eee4c859cc301e73cc06853b7581bb3f91340032b221ae635dc1fde WatchSource:0}: Error finding container 1c14618e8eee4c859cc301e73cc06853b7581bb3f91340032b221ae635dc1fde: Status 404 returned error can't find the container with id 1c14618e8eee4c859cc301e73cc06853b7581bb3f91340032b221ae635dc1fde Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.203592 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.203919 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:06:25.203869983 +0000 UTC m=+19.981670836 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.204318 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.204364 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.204544 4917 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.204571 4917 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.204677 4917 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:25.204629803 +0000 UTC m=+19.982430656 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.205283 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:25.205255508 +0000 UTC m=+19.983056331 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.305693 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.305783 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.305962 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.305984 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.305979 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.306042 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.306002 4917 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.306076 4917 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:24 crc 
kubenswrapper[4917]: E1212 00:06:24.306132 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:25.306110302 +0000 UTC m=+20.083911115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:24 crc kubenswrapper[4917]: E1212 00:06:24.306158 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:25.306147903 +0000 UTC m=+20.083948726 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.731433 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6"} Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.731494 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55"} Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.731506 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1c14618e8eee4c859cc301e73cc06853b7581bb3f91340032b221ae635dc1fde"} Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.736072 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69"} Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.736799 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b79a278fd1b89cd791710333d9f1d724d82361f642e160c4c941c4328bf7200f"} Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.738202 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.740418 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2"} Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.740617 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.741627 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"94900cee4cf7da8f3fd8c0cfe7c71704b1e3078ea00eca86bf3a01e1d62ddf55"} Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.750324 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.763206 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.781550 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.800499 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.817292 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.831555 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.852340 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.865232 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.883780 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.899598 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.913228 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.925784 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.944912 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:24 crc kubenswrapper[4917]: I1212 00:06:24.963675 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.214042 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.214145 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.214184 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.214501 4917 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.214537 4917 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.214619 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:27.214589765 +0000 UTC m=+21.992390908 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.214646 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:27.214634126 +0000 UTC m=+21.992435189 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.215477 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:06:27.215438266 +0000 UTC m=+21.993239099 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.315188 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.315316 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.315363 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.315391 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.315405 4917 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.315471 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:27.315451718 +0000 UTC m=+22.093252531 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.315493 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.315527 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.315550 4917 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.315619 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:27.315594912 +0000 UTC m=+22.093395765 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.601110 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.601148 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.601294 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.601393 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.601131 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:25 crc kubenswrapper[4917]: E1212 00:06:25.601930 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.609138 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.610089 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.611727 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.612714 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.614330 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.615084 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.616015 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.617703 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.618690 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.619943 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.620300 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.620477 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.621309 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.621908 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.622526 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.623224 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 
00:06:25.623912 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.624599 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.625086 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.625787 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.626495 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.627054 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.627721 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.628223 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 
00:06:25.629312 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.630277 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.631228 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.632107 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.632837 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.633593 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.634217 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.634872 4917 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.635011 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.635945 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.636847 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.637492 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.638065 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.641361 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.643106 
4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.643629 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.644350 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.645076 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.645545 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.646156 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.646791 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.647400 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.647882 
4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.648428 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.649030 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.649756 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.650202 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.650626 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.651115 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.654334 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.655973 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.656850 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.657518 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.671320 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.693440 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.712765 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:25 crc kubenswrapper[4917]: I1212 00:06:25.727973 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.169716 4917 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.172155 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.172246 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.172260 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.172318 4917 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.180223 4917 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.180602 4917 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.181976 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.182012 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.182027 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.182045 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.182058 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: E1212 00:06:26.204731 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.209559 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.209601 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.209617 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.209666 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.209691 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: E1212 00:06:26.227936 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.231703 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.231736 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.231746 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.231762 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.231771 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: E1212 00:06:26.248044 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.254537 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.254583 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.254598 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.254619 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.254635 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: E1212 00:06:26.300947 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.306337 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.306382 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.306395 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.306415 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.306426 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: E1212 00:06:26.333444 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: E1212 00:06:26.333600 4917 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.335214 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.335251 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.335262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.335277 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.335286 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.438804 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.439261 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.439454 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.439697 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.439891 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.542269 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.542329 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.542349 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.542374 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.542392 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.645924 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.646026 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.646049 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.646075 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.646093 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.703997 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.709554 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.717549 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.724869 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.741596 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.749684 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.749734 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.749748 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 
00:06:26.749770 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.749787 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.761120 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.775152 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.788457 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.806133 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.824622 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.852422 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.853105 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.853148 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.853164 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.853183 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.853194 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.873226 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.889096 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.903478 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.918883 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.934638 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.948266 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.955353 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.955385 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.955393 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 
00:06:26.955409 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.955419 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:26Z","lastTransitionTime":"2025-12-12T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:26 crc kubenswrapper[4917]: I1212 00:06:26.960027 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.061499 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.061567 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.061580 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.061597 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.061608 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:27Z","lastTransitionTime":"2025-12-12T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.164676 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.164735 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.164747 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.164767 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.164780 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:27Z","lastTransitionTime":"2025-12-12T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.234785 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.234879 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.234920 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.234952 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:06:31.23492725 +0000 UTC m=+26.012728063 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.235057 4917 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.235202 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:31.235167086 +0000 UTC m=+26.012968039 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.235065 4917 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.235290 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-12 00:06:31.235272778 +0000 UTC m=+26.013073761 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.267160 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.267204 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.267213 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.267228 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.267239 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:27Z","lastTransitionTime":"2025-12-12T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.336037 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.336153 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.336188 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.336202 4917 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.336213 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.336266 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:31.336243166 +0000 UTC m=+26.114044179 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.336411 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.336434 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.336448 4917 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.336526 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:31.336503432 +0000 UTC m=+26.114304245 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.370341 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.370426 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.370443 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.370462 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.370472 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:27Z","lastTransitionTime":"2025-12-12T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.473533 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.473598 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.473613 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.473637 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.473676 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:27Z","lastTransitionTime":"2025-12-12T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.576165 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.576213 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.576224 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.576245 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.576255 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:27Z","lastTransitionTime":"2025-12-12T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.601698 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.601707 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.601892 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.601942 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.602075 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:27 crc kubenswrapper[4917]: E1212 00:06:27.602156 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.679639 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.679734 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.679755 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.679778 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.679793 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:27Z","lastTransitionTime":"2025-12-12T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.753670 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e"} Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.773812 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"n
ame\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.782347 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.782387 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.782402 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.782422 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.782436 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:27Z","lastTransitionTime":"2025-12-12T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.792430 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.810897 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.829549 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.848617 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.880255 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.885962 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.886008 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.886019 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.886037 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.886048 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:27Z","lastTransitionTime":"2025-12-12T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.901022 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.917951 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.988810 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.988895 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.988920 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.988955 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:27 crc kubenswrapper[4917]: I1212 00:06:27.988998 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:27Z","lastTransitionTime":"2025-12-12T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.091757 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.091824 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.091849 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.091879 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.091902 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:28Z","lastTransitionTime":"2025-12-12T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.194753 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.194788 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.194799 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.194813 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.194822 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:28Z","lastTransitionTime":"2025-12-12T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.297823 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.297864 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.297878 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.297895 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.297906 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:28Z","lastTransitionTime":"2025-12-12T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.400775 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.400847 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.400862 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.400884 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.400902 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:28Z","lastTransitionTime":"2025-12-12T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.504111 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.504197 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.504218 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.504572 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.504873 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:28Z","lastTransitionTime":"2025-12-12T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.608327 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.608382 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.608398 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.608423 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.608456 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:28Z","lastTransitionTime":"2025-12-12T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.711463 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.711524 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.711538 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.711558 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.711569 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:28Z","lastTransitionTime":"2025-12-12T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.814627 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.814712 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.814729 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.814756 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.814775 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:28Z","lastTransitionTime":"2025-12-12T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.917627 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.917702 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.917718 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.917738 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:28 crc kubenswrapper[4917]: I1212 00:06:28.917748 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:28Z","lastTransitionTime":"2025-12-12T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.021033 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.021083 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.021105 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.021127 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.021141 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:29Z","lastTransitionTime":"2025-12-12T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.124344 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.124387 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.124399 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.124418 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.124432 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:29Z","lastTransitionTime":"2025-12-12T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.226742 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.226791 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.226804 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.226822 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.226834 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:29Z","lastTransitionTime":"2025-12-12T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.329854 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.329909 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.329922 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.329943 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.329957 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:29Z","lastTransitionTime":"2025-12-12T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.433113 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.433159 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.433171 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.433189 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.433203 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:29Z","lastTransitionTime":"2025-12-12T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.536378 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.536438 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.536456 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.536480 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.536495 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:29Z","lastTransitionTime":"2025-12-12T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.601234 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.601302 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.601234 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:29 crc kubenswrapper[4917]: E1212 00:06:29.601408 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:29 crc kubenswrapper[4917]: E1212 00:06:29.601569 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:29 crc kubenswrapper[4917]: E1212 00:06:29.601682 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.639298 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.639333 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.639343 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.639362 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.639374 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:29Z","lastTransitionTime":"2025-12-12T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.741848 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.741890 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.741901 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.741916 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.741926 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:29Z","lastTransitionTime":"2025-12-12T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.845179 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.845231 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.845244 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.845265 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.845278 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:29Z","lastTransitionTime":"2025-12-12T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.854076 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-24mnq"] Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.854334 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hmhzk"] Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.854524 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hmhzk" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.854788 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ktvtt"] Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.855059 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.855386 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.856953 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.857070 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858134 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858177 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858242 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858276 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858300 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858307 4917 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858251 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858333 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858340 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858411 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.858718 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.880781 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.897353 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.910704 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.923555 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.933926 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.946208 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.947888 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.947920 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.947930 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.947945 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.947954 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:29Z","lastTransitionTime":"2025-12-12T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.957935 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3a3ffe88-ff5c-41e9-9824-03044be1c979-hosts-file\") pod \"node-resolver-hmhzk\" (UID: \"3a3ffe88-ff5c-41e9-9824-03044be1c979\") " pod="openshift-dns/node-resolver-hmhzk" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.957976 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-run-netns\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.957999 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpnzj\" (UniqueName: \"kubernetes.io/projected/3a3ffe88-ff5c-41e9-9824-03044be1c979-kube-api-access-kpnzj\") pod \"node-resolver-hmhzk\" (UID: \"3a3ffe88-ff5c-41e9-9824-03044be1c979\") " pod="openshift-dns/node-resolver-hmhzk" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958023 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-cnibin\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958045 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-hostroot\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " 
pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958070 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-conf-dir\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958203 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-run-multus-certs\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958270 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bddbc3a-d8cc-4766-80d3-92562e840be5-mcd-auth-proxy-config\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958303 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-var-lib-kubelet\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958331 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj5rw\" (UniqueName: \"kubernetes.io/projected/7ee00e08-bb29-427d-9de3-6b0616e409fe-kube-api-access-xj5rw\") pod \"multus-24mnq\" (UID: 
\"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958354 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-socket-dir-parent\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958375 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-run-k8s-cni-cncf-io\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958454 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8bddbc3a-d8cc-4766-80d3-92562e840be5-rootfs\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958483 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzhcc\" (UniqueName: \"kubernetes.io/projected/8bddbc3a-d8cc-4766-80d3-92562e840be5-kube-api-access-wzhcc\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958510 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-system-cni-dir\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958551 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-os-release\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958575 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-var-lib-cni-bin\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958597 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-daemon-config\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958663 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ee00e08-bb29-427d-9de3-6b0616e409fe-cni-binary-copy\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958732 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-etc-kubernetes\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958796 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bddbc3a-d8cc-4766-80d3-92562e840be5-proxy-tls\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958820 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-cni-dir\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.958838 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-var-lib-cni-multus\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.963876 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.974195 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.986317 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:29 crc kubenswrapper[4917]: I1212 00:06:29.997187 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.008157 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.024947 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.036031 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.046419 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.049810 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.049841 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.049851 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.049866 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.049876 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:30Z","lastTransitionTime":"2025-12-12T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.056473 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059361 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3a3ffe88-ff5c-41e9-9824-03044be1c979-hosts-file\") pod \"node-resolver-hmhzk\" (UID: \"3a3ffe88-ff5c-41e9-9824-03044be1c979\") " pod="openshift-dns/node-resolver-hmhzk" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059398 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-run-netns\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059424 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bddbc3a-d8cc-4766-80d3-92562e840be5-mcd-auth-proxy-config\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059448 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpnzj\" (UniqueName: \"kubernetes.io/projected/3a3ffe88-ff5c-41e9-9824-03044be1c979-kube-api-access-kpnzj\") pod \"node-resolver-hmhzk\" (UID: \"3a3ffe88-ff5c-41e9-9824-03044be1c979\") " pod="openshift-dns/node-resolver-hmhzk" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059471 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-cnibin\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059492 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-hostroot\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059512 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-conf-dir\") pod \"multus-24mnq\" (UID: 
\"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059533 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-run-multus-certs\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059557 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-var-lib-kubelet\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059577 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj5rw\" (UniqueName: \"kubernetes.io/projected/7ee00e08-bb29-427d-9de3-6b0616e409fe-kube-api-access-xj5rw\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059582 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3a3ffe88-ff5c-41e9-9824-03044be1c979-hosts-file\") pod \"node-resolver-hmhzk\" (UID: \"3a3ffe88-ff5c-41e9-9824-03044be1c979\") " pod="openshift-dns/node-resolver-hmhzk" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059599 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-socket-dir-parent\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: 
I1212 00:06:30.059689 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-socket-dir-parent\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059705 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-run-k8s-cni-cncf-io\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059764 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8bddbc3a-d8cc-4766-80d3-92562e840be5-rootfs\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059969 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-cnibin\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.059984 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-run-netns\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060007 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-hostroot\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060024 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-run-multus-certs\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060046 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-var-lib-kubelet\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060068 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-run-k8s-cni-cncf-io\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060161 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8bddbc3a-d8cc-4766-80d3-92562e840be5-rootfs\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060218 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-conf-dir\") pod \"multus-24mnq\" (UID: 
\"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060258 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bddbc3a-d8cc-4766-80d3-92562e840be5-mcd-auth-proxy-config\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060377 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzhcc\" (UniqueName: \"kubernetes.io/projected/8bddbc3a-d8cc-4766-80d3-92562e840be5-kube-api-access-wzhcc\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060399 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-system-cni-dir\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060422 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-var-lib-cni-bin\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060438 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-daemon-config\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") 
" pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060452 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-os-release\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060486 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ee00e08-bb29-427d-9de3-6b0616e409fe-cni-binary-copy\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060502 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-etc-kubernetes\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060518 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bddbc3a-d8cc-4766-80d3-92562e840be5-proxy-tls\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060548 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-cni-dir\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060564 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-var-lib-cni-multus\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060614 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-var-lib-cni-multus\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060834 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-system-cni-dir\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060865 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-host-var-lib-cni-bin\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.060972 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-etc-kubernetes\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.061065 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-cni-dir\") pod 
\"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.061068 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ee00e08-bb29-427d-9de3-6b0616e409fe-os-release\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.061591 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7ee00e08-bb29-427d-9de3-6b0616e409fe-multus-daemon-config\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.061782 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ee00e08-bb29-427d-9de3-6b0616e409fe-cni-binary-copy\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.065958 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bddbc3a-d8cc-4766-80d3-92562e840be5-proxy-tls\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.069314 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.074409 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpnzj\" (UniqueName: \"kubernetes.io/projected/3a3ffe88-ff5c-41e9-9824-03044be1c979-kube-api-access-kpnzj\") pod \"node-resolver-hmhzk\" (UID: \"3a3ffe88-ff5c-41e9-9824-03044be1c979\") " pod="openshift-dns/node-resolver-hmhzk" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.079968 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj5rw\" (UniqueName: \"kubernetes.io/projected/7ee00e08-bb29-427d-9de3-6b0616e409fe-kube-api-access-xj5rw\") pod \"multus-24mnq\" (UID: \"7ee00e08-bb29-427d-9de3-6b0616e409fe\") " pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.082772 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.085442 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzhcc\" (UniqueName: \"kubernetes.io/projected/8bddbc3a-d8cc-4766-80d3-92562e840be5-kube-api-access-wzhcc\") pod \"machine-config-daemon-ktvtt\" (UID: \"8bddbc3a-d8cc-4766-80d3-92562e840be5\") " pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.096013 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.110112 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.121431 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.152939 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.153216 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.153298 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.153374 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.153436 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:30Z","lastTransitionTime":"2025-12-12T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.166308 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hmhzk" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.175507 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-24mnq" Dec 12 00:06:30 crc kubenswrapper[4917]: W1212 00:06:30.178065 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a3ffe88_ff5c_41e9_9824_03044be1c979.slice/crio-f62363a04ce30c650803a5d7cc68a0884095d9aa39ef81c547b0dc99673b82a7 WatchSource:0}: Error finding container f62363a04ce30c650803a5d7cc68a0884095d9aa39ef81c547b0dc99673b82a7: Status 404 returned error can't find the container with id f62363a04ce30c650803a5d7cc68a0884095d9aa39ef81c547b0dc99673b82a7 Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.181836 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:06:30 crc kubenswrapper[4917]: W1212 00:06:30.192693 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee00e08_bb29_427d_9de3_6b0616e409fe.slice/crio-8fff62553011ae067cc91d75a816287d2ec9d4a7f55881e9a7df619310b1d928 WatchSource:0}: Error finding container 8fff62553011ae067cc91d75a816287d2ec9d4a7f55881e9a7df619310b1d928: Status 404 returned error can't find the container with id 8fff62553011ae067cc91d75a816287d2ec9d4a7f55881e9a7df619310b1d928 Dec 12 00:06:30 crc kubenswrapper[4917]: W1212 00:06:30.199307 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bddbc3a_d8cc_4766_80d3_92562e840be5.slice/crio-8aec9d2f6de98ff63535b8e5bb5258e1eb50df65259fa6e0b316b50cc1af8b98 WatchSource:0}: Error finding container 8aec9d2f6de98ff63535b8e5bb5258e1eb50df65259fa6e0b316b50cc1af8b98: Status 404 returned error can't find the container with id 8aec9d2f6de98ff63535b8e5bb5258e1eb50df65259fa6e0b316b50cc1af8b98 Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.231982 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-26hjd"] Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.232818 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.233999 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qkh7m"] Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.234437 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.234933 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.235103 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.235725 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.235928 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.235986 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.236048 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.236160 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.236171 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.238240 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.246897 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.257851 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.257893 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.257907 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.257930 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.257943 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:30Z","lastTransitionTime":"2025-12-12T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.258955 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.270429 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.284558 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.295561 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.309918 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.321853 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.331914 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.348069 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.360550 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.360635 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.360695 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.360716 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.360733 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:30Z","lastTransitionTime":"2025-12-12T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.364170 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.365975 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-bin\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366020 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-netd\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366049 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: 
I1212 00:06:30.366090 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-cnibin\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366253 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-openvswitch\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366345 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-ovn\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366442 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-systemd\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366478 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovn-node-metrics-cert\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc 
kubenswrapper[4917]: I1212 00:06:30.366512 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/75be0c6b-6364-4d5a-9494-25cdbd35ce08-cni-binary-copy\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366539 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-var-lib-openvswitch\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366567 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-etc-openvswitch\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366600 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-script-lib\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366631 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/75be0c6b-6364-4d5a-9494-25cdbd35ce08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: 
\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366698 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-netns\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366732 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-node-log\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366785 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-systemd-units\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366807 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-ovn-kubernetes\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366836 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-system-cni-dir\") pod 
\"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366894 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-os-release\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366926 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-kubelet\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366953 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-log-socket\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.366976 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-slash\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.367003 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9gct\" (UniqueName: 
\"kubernetes.io/projected/c740630c-23cb-4c02-ab4e-bac3d773dce4-kube-api-access-k9gct\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.367031 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.367067 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-env-overrides\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.367095 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzck8\" (UniqueName: \"kubernetes.io/projected/75be0c6b-6364-4d5a-9494-25cdbd35ce08-kube-api-access-wzck8\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.367125 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-config\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.379317 4917 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.402878 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.439115 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467554 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-bin\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467600 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-netd\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467622 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467650 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-cnibin\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467691 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-openvswitch\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467713 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-ovn\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467736 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovn-node-metrics-cert\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467771 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-systemd\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467795 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/75be0c6b-6364-4d5a-9494-25cdbd35ce08-cni-binary-copy\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467817 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-var-lib-openvswitch\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467838 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-etc-openvswitch\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467861 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-script-lib\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467881 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/75be0c6b-6364-4d5a-9494-25cdbd35ce08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467903 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-netns\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467929 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-node-log\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467950 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-ovn-kubernetes\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.467984 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-systemd-units\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.468005 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-system-cni-dir\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.468047 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-os-release\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.468072 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-kubelet\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.468093 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-log-socket\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.468112 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-slash\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.468132 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9gct\" (UniqueName: \"kubernetes.io/projected/c740630c-23cb-4c02-ab4e-bac3d773dce4-kube-api-access-k9gct\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.468152 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.468173 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-env-overrides\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.468198 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzck8\" (UniqueName: \"kubernetes.io/projected/75be0c6b-6364-4d5a-9494-25cdbd35ce08-kube-api-access-wzck8\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.468220 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-config\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469184 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-config\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469255 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-bin\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469288 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-netd\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469319 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469351 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-cnibin\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469387 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-openvswitch\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469417 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-ovn\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469880 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469908 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469919 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469936 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.469948 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:30Z","lastTransitionTime":"2025-12-12T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.470609 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.470725 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-systemd-units\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.470777 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-systemd\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.470808 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-slash\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.470850 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-system-cni-dir\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.470864 4917 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.470907 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/75be0c6b-6364-4d5a-9494-25cdbd35ce08-os-release\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.470945 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-kubelet\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.470977 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-log-socket\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.471422 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-env-overrides\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.471468 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/75be0c6b-6364-4d5a-9494-25cdbd35ce08-cni-binary-copy\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.471508 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-var-lib-openvswitch\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.471510 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-node-log\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.471532 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-netns\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.471476 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-etc-openvswitch\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.471553 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-ovn-kubernetes\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.471790 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/75be0c6b-6364-4d5a-9494-25cdbd35ce08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.472110 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-script-lib\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.480494 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovn-node-metrics-cert\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.497168 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzck8\" (UniqueName: \"kubernetes.io/projected/75be0c6b-6364-4d5a-9494-25cdbd35ce08-kube-api-access-wzck8\") pod \"multus-additional-cni-plugins-qkh7m\" (UID: \"75be0c6b-6364-4d5a-9494-25cdbd35ce08\") " pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.497981 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9gct\" (UniqueName: 
\"kubernetes.io/projected/c740630c-23cb-4c02-ab4e-bac3d773dce4-kube-api-access-k9gct\") pod \"ovnkube-node-26hjd\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.502681 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.519153 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.531470 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.546126 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.559573 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.572263 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.572320 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.572334 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.572354 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.572367 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:30Z","lastTransitionTime":"2025-12-12T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.575958 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.588617 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.601227 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.612137 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.625702 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.639600 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.676166 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.676228 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.676243 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.676266 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.676280 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:30Z","lastTransitionTime":"2025-12-12T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.739264 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.739186 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:30 crc kubenswrapper[4917]: W1212 00:06:30.761820 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc740630c_23cb_4c02_ab4e_bac3d773dce4.slice/crio-9ee4e2737bcbde4bd936b5b422d0c41bba4e5fe97648ffd5b55c6b2a072c04a3 WatchSource:0}: Error finding container 9ee4e2737bcbde4bd936b5b422d0c41bba4e5fe97648ffd5b55c6b2a072c04a3: Status 404 returned error can't find the container with id 9ee4e2737bcbde4bd936b5b422d0c41bba4e5fe97648ffd5b55c6b2a072c04a3 Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.767925 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hmhzk" event={"ID":"3a3ffe88-ff5c-41e9-9824-03044be1c979","Type":"ContainerStarted","Data":"04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.767975 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hmhzk" event={"ID":"3a3ffe88-ff5c-41e9-9824-03044be1c979","Type":"ContainerStarted","Data":"f62363a04ce30c650803a5d7cc68a0884095d9aa39ef81c547b0dc99673b82a7"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.769623 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" 
event={"ID":"75be0c6b-6364-4d5a-9494-25cdbd35ce08","Type":"ContainerStarted","Data":"9f2ee339b0e2e88330afcc006114bda7e6bc9536bc6780ef5d51a81a63a79d66"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.771281 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24mnq" event={"ID":"7ee00e08-bb29-427d-9de3-6b0616e409fe","Type":"ContainerStarted","Data":"81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.771313 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24mnq" event={"ID":"7ee00e08-bb29-427d-9de3-6b0616e409fe","Type":"ContainerStarted","Data":"8fff62553011ae067cc91d75a816287d2ec9d4a7f55881e9a7df619310b1d928"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.773783 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.773814 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.773825 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"8aec9d2f6de98ff63535b8e5bb5258e1eb50df65259fa6e0b316b50cc1af8b98"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.779403 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 
00:06:30.779429 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.779439 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.779451 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.779461 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:30Z","lastTransitionTime":"2025-12-12T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.782557 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.801880 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.817369 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.831144 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.843339 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.855950 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.871082 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.884429 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.885866 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.885911 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.885924 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.885942 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.885952 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:30Z","lastTransitionTime":"2025-12-12T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.900139 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.916694 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.931989 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.943976 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.963087 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.975782 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.988432 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.988476 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.988489 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 
00:06:30.988504 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.988517 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:30Z","lastTransitionTime":"2025-12-12T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:30 crc kubenswrapper[4917]: I1212 00:06:30.990397 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.004797 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.018137 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.030903 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.044458 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.058390 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.087953 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.090735 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.090784 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.090797 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.090817 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.090830 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:31Z","lastTransitionTime":"2025-12-12T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.105077 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.133007 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.168610 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.193536 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.193573 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.193584 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.193601 4917 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.193614 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:31Z","lastTransitionTime":"2025-12-12T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.211275 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.245764 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.276040 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.276208 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.276239 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.276278 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:06:39.276242518 +0000 UTC m=+34.054043331 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.276346 4917 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.276422 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:39.276403642 +0000 UTC m=+34.054204445 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.276423 4917 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.276538 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-12 00:06:39.276509144 +0000 UTC m=+34.054310117 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.296339 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.296411 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.296421 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.296437 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.296449 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:31Z","lastTransitionTime":"2025-12-12T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.377958 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.378423 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.378211 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.378508 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.378527 4917 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.378572 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 
00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.378590 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.378603 4917 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.378613 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:39.378586129 +0000 UTC m=+34.156387122 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.378683 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:39.37863976 +0000 UTC m=+34.156440573 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.398862 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.398908 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.398922 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.398943 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.398957 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:31Z","lastTransitionTime":"2025-12-12T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.503409 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.503452 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.503464 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.503485 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.503495 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:31Z","lastTransitionTime":"2025-12-12T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.601089 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.601089 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.601247 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.601102 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.601390 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:31 crc kubenswrapper[4917]: E1212 00:06:31.601412 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.607455 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.607533 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.607546 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.607564 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.607576 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:31Z","lastTransitionTime":"2025-12-12T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.710580 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.710862 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.710966 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.711039 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.711102 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:31Z","lastTransitionTime":"2025-12-12T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.777357 4917 generic.go:334] "Generic (PLEG): container finished" podID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerID="958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51" exitCode=0 Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.777458 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.777697 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"9ee4e2737bcbde4bd936b5b422d0c41bba4e5fe97648ffd5b55c6b2a072c04a3"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.779753 4917 generic.go:334] "Generic (PLEG): container finished" podID="75be0c6b-6364-4d5a-9494-25cdbd35ce08" containerID="6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845" exitCode=0 Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.779832 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" event={"ID":"75be0c6b-6364-4d5a-9494-25cdbd35ce08","Type":"ContainerDied","Data":"6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.793083 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.808606 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.814017 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 
00:06:31.814357 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.814369 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.814386 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.814396 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:31Z","lastTransitionTime":"2025-12-12T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.818994 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.835220 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.853282 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.879105 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.895767 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.908513 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.918005 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.918063 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.918076 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.918101 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.918115 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:31Z","lastTransitionTime":"2025-12-12T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.925104 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.944973 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.957794 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.969135 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:31 crc kubenswrapper[4917]: I1212 00:06:31.986025 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.005083 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.020612 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.020672 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.020688 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.020710 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.020721 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:32Z","lastTransitionTime":"2025-12-12T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.023240 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.038307 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.053700 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.073544 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.089320 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.104751 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.122152 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.124672 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.124857 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.125127 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.126924 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.126967 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:32Z","lastTransitionTime":"2025-12-12T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.136744 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.165019 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.205595 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.229636 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.229729 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.229741 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.229781 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.229792 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:32Z","lastTransitionTime":"2025-12-12T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.246741 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z 
is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.286525 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.332581 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.332624 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.332635 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.332675 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.332691 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:32Z","lastTransitionTime":"2025-12-12T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.436716 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.436776 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.436789 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.436812 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.436824 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:32Z","lastTransitionTime":"2025-12-12T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.448192 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-5tpmh"] Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.448682 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.453564 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.454387 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.454676 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.455476 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.470870 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.508239 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.527491 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.539432 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.539501 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.539513 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.539544 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.539559 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:32Z","lastTransitionTime":"2025-12-12T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.545282 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.565387 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.588850 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df038132-e4e9-47cf-a5e4-384eff3548db-host\") pod \"node-ca-5tpmh\" (UID: \"df038132-e4e9-47cf-a5e4-384eff3548db\") " pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.589053 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vcv4g\" (UniqueName: \"kubernetes.io/projected/df038132-e4e9-47cf-a5e4-384eff3548db-kube-api-access-vcv4g\") pod \"node-ca-5tpmh\" (UID: \"df038132-e4e9-47cf-a5e4-384eff3548db\") " pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.589169 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/df038132-e4e9-47cf-a5e4-384eff3548db-serviceca\") pod \"node-ca-5tpmh\" (UID: \"df038132-e4e9-47cf-a5e4-384eff3548db\") " pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.604269 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.642189 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.642817 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.642829 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.642848 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:32 crc 
kubenswrapper[4917]: I1212 00:06:32.642860 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:32Z","lastTransitionTime":"2025-12-12T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.647508 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.1
26.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.687910 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.690377 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df038132-e4e9-47cf-a5e4-384eff3548db-host\") pod \"node-ca-5tpmh\" (UID: \"df038132-e4e9-47cf-a5e4-384eff3548db\") " pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.690447 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcv4g\" (UniqueName: \"kubernetes.io/projected/df038132-e4e9-47cf-a5e4-384eff3548db-kube-api-access-vcv4g\") pod \"node-ca-5tpmh\" (UID: \"df038132-e4e9-47cf-a5e4-384eff3548db\") " pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.690491 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/df038132-e4e9-47cf-a5e4-384eff3548db-serviceca\") pod \"node-ca-5tpmh\" (UID: \"df038132-e4e9-47cf-a5e4-384eff3548db\") " pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.690521 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df038132-e4e9-47cf-a5e4-384eff3548db-host\") pod \"node-ca-5tpmh\" (UID: \"df038132-e4e9-47cf-a5e4-384eff3548db\") " pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.691926 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/df038132-e4e9-47cf-a5e4-384eff3548db-serviceca\") pod \"node-ca-5tpmh\" (UID: \"df038132-e4e9-47cf-a5e4-384eff3548db\") " pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.737173 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcv4g\" (UniqueName: \"kubernetes.io/projected/df038132-e4e9-47cf-a5e4-384eff3548db-kube-api-access-vcv4g\") pod \"node-ca-5tpmh\" (UID: \"df038132-e4e9-47cf-a5e4-384eff3548db\") " pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.746199 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.746262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.746277 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.746296 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:32 crc kubenswrapper[4917]: 
I1212 00:06:32.746307 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:32Z","lastTransitionTime":"2025-12-12T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.747482 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.762347 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5tpmh" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.784788 4917 generic.go:334] "Generic (PLEG): container finished" podID="75be0c6b-6364-4d5a-9494-25cdbd35ce08" containerID="653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8" exitCode=0 Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.784923 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" event={"ID":"75be0c6b-6364-4d5a-9494-25cdbd35ce08","Type":"ContainerDied","Data":"653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.790592 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.790675 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.790688 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.790698 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.790707 4917 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.790745 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.792315 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5tpmh" event={"ID":"df038132-e4e9-47cf-a5e4-384eff3548db","Type":"ContainerStarted","Data":"06a1c79da77ad6d58ee4de8c2145ae80576a2a0f4cec32637c273e4b59a18922"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.793113 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.829136 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.851579 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 
00:06:32.851625 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.851685 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.851706 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.851720 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:32Z","lastTransitionTime":"2025-12-12T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.869936 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.907265 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.955544 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.956051 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.956092 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.956106 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.956125 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.956137 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:32Z","lastTransitionTime":"2025-12-12T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:32 crc kubenswrapper[4917]: I1212 00:06:32.988140 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.028169 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.059037 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.059074 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.059085 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:33 crc 
kubenswrapper[4917]: I1212 00:06:33.059100 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.059110 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:33Z","lastTransitionTime":"2025-12-12T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.069470 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.108917 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.148902 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.162215 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.162256 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.162271 4917 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.162292 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.162305 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:33Z","lastTransitionTime":"2025-12-12T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.184601 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.234289 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.266066 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.266111 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.266122 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.266140 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.266152 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:33Z","lastTransitionTime":"2025-12-12T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.270747 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.309984 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.347098 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.368951 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.369024 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.369038 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.369060 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.369085 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:33Z","lastTransitionTime":"2025-12-12T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.393815 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.427127 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.467277 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.472604 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.472670 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.472692 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.472714 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.472731 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:33Z","lastTransitionTime":"2025-12-12T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.509871 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.575548 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.575624 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.575663 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.575687 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.575706 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:33Z","lastTransitionTime":"2025-12-12T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.601217 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.601250 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.601317 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:33 crc kubenswrapper[4917]: E1212 00:06:33.601401 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:33 crc kubenswrapper[4917]: E1212 00:06:33.601516 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:33 crc kubenswrapper[4917]: E1212 00:06:33.601702 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.678786 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.678835 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.678848 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.678866 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.678879 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:33Z","lastTransitionTime":"2025-12-12T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.781841 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.781875 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.781888 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.781906 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.781919 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:33Z","lastTransitionTime":"2025-12-12T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.796923 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5tpmh" event={"ID":"df038132-e4e9-47cf-a5e4-384eff3548db","Type":"ContainerStarted","Data":"aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.799878 4917 generic.go:334] "Generic (PLEG): container finished" podID="75be0c6b-6364-4d5a-9494-25cdbd35ce08" containerID="193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60" exitCode=0 Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.799925 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" event={"ID":"75be0c6b-6364-4d5a-9494-25cdbd35ce08","Type":"ContainerDied","Data":"193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.816271 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.832401 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.853278 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.867085 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.881519 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.884479 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.884518 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.884531 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:33 crc 
kubenswrapper[4917]: I1212 00:06:33.884548 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.884559 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:33Z","lastTransitionTime":"2025-12-12T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.894946 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.912195 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.925450 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.939210 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.957339 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.968557 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.986384 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12
T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.987066 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.987097 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.987111 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.987128 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:33 crc kubenswrapper[4917]: I1212 00:06:33.987139 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:33Z","lastTransitionTime":"2025-12-12T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.028298 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.067171 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.089823 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.089864 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.089874 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.089891 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.089901 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:34Z","lastTransitionTime":"2025-12-12T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.108257 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.146701 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.188054 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.194049 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:34 crc 
kubenswrapper[4917]: I1212 00:06:34.194098 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.194112 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.194133 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.194146 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:34Z","lastTransitionTime":"2025-12-12T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.227938 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.264138 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e9
6f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.297212 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.297247 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.297257 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.297273 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.297283 4917 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:34Z","lastTransitionTime":"2025-12-12T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.308691 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.346962 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.385824 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.399878 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.399924 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.399937 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.399954 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.399968 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:34Z","lastTransitionTime":"2025-12-12T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.427547 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.464214 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.502925 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.502976 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.502987 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.503003 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.503014 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:34Z","lastTransitionTime":"2025-12-12T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.513008 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.548765 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.589184 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.605872 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.605926 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.605938 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.605960 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.605973 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:34Z","lastTransitionTime":"2025-12-12T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.627761 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.708803 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.708854 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.708870 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.708889 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.708903 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:34Z","lastTransitionTime":"2025-12-12T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.808284 4917 generic.go:334] "Generic (PLEG): container finished" podID="75be0c6b-6364-4d5a-9494-25cdbd35ce08" containerID="9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2" exitCode=0 Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.808413 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" event={"ID":"75be0c6b-6364-4d5a-9494-25cdbd35ce08","Type":"ContainerDied","Data":"9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.812842 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.812937 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.812966 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.813004 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.813029 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:34Z","lastTransitionTime":"2025-12-12T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.819062 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.828105 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.841815 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.857724 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.872882 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.884258 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.895951 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.911077 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.915416 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.915471 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.915541 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.915569 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.915581 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:34Z","lastTransitionTime":"2025-12-12T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.946789 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:34 crc kubenswrapper[4917]: I1212 00:06:34.984760 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:34Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.019111 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.019153 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.019165 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.019184 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.019195 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:35Z","lastTransitionTime":"2025-12-12T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.031726 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.067231 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.109041 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.122202 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.122248 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.122262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:35 crc 
kubenswrapper[4917]: I1212 00:06:35.122284 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.122297 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:35Z","lastTransitionTime":"2025-12-12T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.148882 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.187020 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.225134 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.225182 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.225194 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.225215 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.225227 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:35Z","lastTransitionTime":"2025-12-12T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.328018 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.328092 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.328111 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.328139 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.328159 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:35Z","lastTransitionTime":"2025-12-12T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.433226 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.433286 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.433300 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.433323 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.433335 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:35Z","lastTransitionTime":"2025-12-12T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.536984 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.537044 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.537058 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.537086 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.537268 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:35Z","lastTransitionTime":"2025-12-12T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.601307 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:35 crc kubenswrapper[4917]: E1212 00:06:35.601457 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.601636 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.601736 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:35 crc kubenswrapper[4917]: E1212 00:06:35.601845 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:35 crc kubenswrapper[4917]: E1212 00:06:35.601915 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.630107 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.640445 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.640503 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.640519 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.640538 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.640553 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:35Z","lastTransitionTime":"2025-12-12T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.646207 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.662877 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.684053 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.696606 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.710758 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.724225 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.738042 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.742794 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.742850 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.742880 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.742901 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.742914 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:35Z","lastTransitionTime":"2025-12-12T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.758429 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.773007 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.786686 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.803106 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.818419 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.826792 4917 generic.go:334] "Generic (PLEG): container finished" podID="75be0c6b-6364-4d5a-9494-25cdbd35ce08" containerID="a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6" exitCode=0 Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.826884 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" event={"ID":"75be0c6b-6364-4d5a-9494-25cdbd35ce08","Type":"ContainerDied","Data":"a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.833916 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.848886 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.849350 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.849513 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:35 crc 
kubenswrapper[4917]: I1212 00:06:35.849612 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.849798 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:35Z","lastTransitionTime":"2025-12-12T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.849678 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 
00:06:35.866968 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.878429 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.906036 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.949022 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.952149 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.952188 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.952198 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.952214 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.952224 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:35Z","lastTransitionTime":"2025-12-12T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:35 crc kubenswrapper[4917]: I1212 00:06:35.984235 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.030194 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.054017 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.054066 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.054078 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.054096 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.054108 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.066817 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.106001 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.147239 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.157099 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.157137 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.157203 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.157247 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.157261 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.184149 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0
d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.226248 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.259291 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.259326 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.259336 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.259351 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.259361 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.269986 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.308083 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.362625 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.362687 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.362700 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.362719 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.362733 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.465608 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.465666 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.465679 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.465700 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.465712 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.568470 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.568518 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.568528 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.568545 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.568554 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.621408 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.621456 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.621467 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.621484 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.621495 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: E1212 00:06:36.639052 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.643538 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.643591 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.643603 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.643621 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.643643 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: E1212 00:06:36.660636 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status ... for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.665029 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.665082 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.665097 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.665412 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.665448 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: E1212 00:06:36.678743 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status ... for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.682475 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.682515 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.682525 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.682544 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.682555 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: E1212 00:06:36.694725 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status ...
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.697892 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.697936 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.697947 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.697965 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.697975 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: E1212 00:06:36.710802 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: E1212 00:06:36.710925 4917 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.712444 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.712479 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.712489 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.712507 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.712517 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.815995 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.816073 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.816094 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.816124 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.816143 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.836987 4917 generic.go:334] "Generic (PLEG): container finished" podID="75be0c6b-6364-4d5a-9494-25cdbd35ce08" containerID="af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2" exitCode=0 Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.837065 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" event={"ID":"75be0c6b-6364-4d5a-9494-25cdbd35ce08","Type":"ContainerDied","Data":"af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2"} Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.864720 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.881882 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.902916 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.916471 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.919195 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.919220 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.919256 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.919279 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.919293 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:36Z","lastTransitionTime":"2025-12-12T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.933019 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.948893 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.965723 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.982438 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:36 crc kubenswrapper[4917]: I1212 00:06:36.997285 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.008950 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.022756 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.022795 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.022807 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.022825 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.022840 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:37Z","lastTransitionTime":"2025-12-12T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.024821 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z 
is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.043246 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.057070 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.076324 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12
T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.126839 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.126907 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.126923 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.126944 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.126956 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:37Z","lastTransitionTime":"2025-12-12T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.229853 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.229895 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.229905 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.229920 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.229931 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:37Z","lastTransitionTime":"2025-12-12T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.332637 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.332722 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.332736 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.332753 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.332765 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:37Z","lastTransitionTime":"2025-12-12T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.435974 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.436012 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.436023 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.436038 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.436048 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:37Z","lastTransitionTime":"2025-12-12T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.539612 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.539708 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.539725 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.539746 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.539757 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:37Z","lastTransitionTime":"2025-12-12T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.601698 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.601764 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.601715 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:37 crc kubenswrapper[4917]: E1212 00:06:37.601889 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:37 crc kubenswrapper[4917]: E1212 00:06:37.602019 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:37 crc kubenswrapper[4917]: E1212 00:06:37.602108 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.643364 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.643424 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.643435 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.643455 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.643466 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:37Z","lastTransitionTime":"2025-12-12T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.746025 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.746065 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.746077 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.746096 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.746108 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:37Z","lastTransitionTime":"2025-12-12T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.837432 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.847839 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.847877 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.847886 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.847935 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.847947 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:37Z","lastTransitionTime":"2025-12-12T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.851597 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" event={"ID":"75be0c6b-6364-4d5a-9494-25cdbd35ce08","Type":"ContainerStarted","Data":"f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.857570 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.858263 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.858374 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.859695 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17
c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.875536 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.881721 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.881798 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.894975 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.909980 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.925816 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.941092 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.951152 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.951198 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.951210 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.951229 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.951239 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:37Z","lastTransitionTime":"2025-12-12T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.954593 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.967219 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.980200 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:37 crc kubenswrapper[4917]: I1212 00:06:37.995439 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.012505 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.031917 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.043704 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.054873 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.054920 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.054934 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.054952 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.054965 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:38Z","lastTransitionTime":"2025-12-12T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.060069 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.076062 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.091988 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.107247 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.121856 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f917285
7465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.137960 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.152629 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12
T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.156778 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.156843 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.156860 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.156883 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.156896 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:38Z","lastTransitionTime":"2025-12-12T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.168131 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.182072 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.201730 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.217186 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.231297 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.245933 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.259331 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.259389 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.259402 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.259422 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.259433 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:38Z","lastTransitionTime":"2025-12-12T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.261301 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.273081 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.362483 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.362528 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.362539 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:38 crc 
kubenswrapper[4917]: I1212 00:06:38.362557 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.362570 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:38Z","lastTransitionTime":"2025-12-12T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.465472 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.465537 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.465557 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.465584 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.465597 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:38Z","lastTransitionTime":"2025-12-12T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.568801 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.568854 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.568867 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.568897 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.568911 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:38Z","lastTransitionTime":"2025-12-12T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.671852 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.671903 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.671919 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.671939 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.671950 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:38Z","lastTransitionTime":"2025-12-12T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.774551 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.774599 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.774612 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.774634 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.774662 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:38Z","lastTransitionTime":"2025-12-12T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.861543 4917 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.877778 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.877820 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.877831 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.877846 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.877856 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:38Z","lastTransitionTime":"2025-12-12T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.980926 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.981006 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.981022 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.981042 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:38 crc kubenswrapper[4917]: I1212 00:06:38.981057 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:38Z","lastTransitionTime":"2025-12-12T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.083704 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.083759 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.083774 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.083793 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.083805 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:39Z","lastTransitionTime":"2025-12-12T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.186788 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.186846 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.186859 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.186878 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.186890 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:39Z","lastTransitionTime":"2025-12-12T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.290004 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.290049 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.290061 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.290078 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.290090 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:39Z","lastTransitionTime":"2025-12-12T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.361818 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.361980 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-12 00:06:55.361952763 +0000 UTC m=+50.139753576 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.362069 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.362132 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.362264 4917 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.362285 4917 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.362334 4917 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:55.362324263 +0000 UTC m=+50.140125076 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.362349 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:55.362342264 +0000 UTC m=+50.140143077 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.392717 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.392765 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.392775 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.392792 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.392803 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:39Z","lastTransitionTime":"2025-12-12T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.463385 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.463471 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.463570 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.463602 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.463616 4917 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.463574 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:39 crc 
kubenswrapper[4917]: E1212 00:06:39.463683 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:55.463665998 +0000 UTC m=+50.241466811 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.463687 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.463700 4917 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.463744 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 00:06:55.46372627 +0000 UTC m=+50.241527093 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.496577 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.496674 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.496694 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.496722 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.496739 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:39Z","lastTransitionTime":"2025-12-12T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.599706 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.599750 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.599765 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.599783 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.599795 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:39Z","lastTransitionTime":"2025-12-12T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.601210 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.601210 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.601333 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.601436 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.601878 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:39 crc kubenswrapper[4917]: E1212 00:06:39.601986 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.702529 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.702594 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.702610 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.702635 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.702679 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:39Z","lastTransitionTime":"2025-12-12T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.806255 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.806332 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.806357 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.806392 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.806416 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:39Z","lastTransitionTime":"2025-12-12T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.865021 4917 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.909817 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.909869 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.909882 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.909901 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:39 crc kubenswrapper[4917]: I1212 00:06:39.909912 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:39Z","lastTransitionTime":"2025-12-12T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.012336 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.012393 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.012406 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.012426 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.012439 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:40Z","lastTransitionTime":"2025-12-12T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.115728 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.115782 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.115794 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.115815 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.115837 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:40Z","lastTransitionTime":"2025-12-12T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.220677 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.220732 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.220745 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.220763 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.220775 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:40Z","lastTransitionTime":"2025-12-12T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.323559 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.323616 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.323636 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.323842 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.323862 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:40Z","lastTransitionTime":"2025-12-12T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.427013 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.427082 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.427099 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.427119 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.427131 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:40Z","lastTransitionTime":"2025-12-12T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.530053 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.530126 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.530152 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.530184 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.530210 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:40Z","lastTransitionTime":"2025-12-12T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.632883 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.632937 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.632950 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.632978 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.632990 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:40Z","lastTransitionTime":"2025-12-12T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.735281 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.735344 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.735361 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.735384 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.735397 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:40Z","lastTransitionTime":"2025-12-12T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.839224 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.839271 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.839279 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.839294 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.839306 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:40Z","lastTransitionTime":"2025-12-12T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.942763 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.942812 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.942832 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.942852 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:40 crc kubenswrapper[4917]: I1212 00:06:40.942865 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:40Z","lastTransitionTime":"2025-12-12T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.046523 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.046587 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.046600 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.046620 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.046633 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:41Z","lastTransitionTime":"2025-12-12T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.149716 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.149774 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.149789 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.149811 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.149834 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:41Z","lastTransitionTime":"2025-12-12T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.255144 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.255189 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.255200 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.255216 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.255227 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:41Z","lastTransitionTime":"2025-12-12T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.357830 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.357884 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.357894 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.357911 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.357920 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:41Z","lastTransitionTime":"2025-12-12T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.460694 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.460747 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.460761 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.460781 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.460807 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:41Z","lastTransitionTime":"2025-12-12T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.563987 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.564059 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.564072 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.564097 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.564111 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:41Z","lastTransitionTime":"2025-12-12T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.601858 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.601854 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:41 crc kubenswrapper[4917]: E1212 00:06:41.602056 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.601884 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:41 crc kubenswrapper[4917]: E1212 00:06:41.602251 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:41 crc kubenswrapper[4917]: E1212 00:06:41.602489 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.666787 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.666837 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.666850 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.666875 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.666895 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:41Z","lastTransitionTime":"2025-12-12T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.769560 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.769631 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.769702 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.769738 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.769760 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:41Z","lastTransitionTime":"2025-12-12T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.872319 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.872364 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.872376 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.872394 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.872406 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:41Z","lastTransitionTime":"2025-12-12T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.873746 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/0.log" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.876094 4917 generic.go:334] "Generic (PLEG): container finished" podID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerID="a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca" exitCode=1 Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.876131 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.876819 4917 scope.go:117] "RemoveContainer" containerID="a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.896289 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\
\\"2025-12-12T00:06:41Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI1212 00:06:39.035317 6222 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:39.035352 6222 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035392 6222 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035420 6222 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035447 6222 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035485 6222 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035509 6222 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.037103 6222 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:39.035542 6222 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc
8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.912259 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17
c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.927604 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.942108 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.960490 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.975054 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.975106 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.975118 4917 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.975134 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.975144 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:41Z","lastTransitionTime":"2025-12-12T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.976280 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:41 crc kubenswrapper[4917]: I1212 00:06:41.989179 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.005619 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.021125 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.024078 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.034825 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.046851 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.060527 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.077332 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f917285
7465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.078140 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.078170 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.078181 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.078206 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.078221 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:42Z","lastTransitionTime":"2025-12-12T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.091262 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.181509 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.181585 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.181605 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.181632 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.181728 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:42Z","lastTransitionTime":"2025-12-12T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.283929 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.283987 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.284002 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.284028 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.284045 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:42Z","lastTransitionTime":"2025-12-12T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.386882 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.386931 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.386947 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.386968 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.386986 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:42Z","lastTransitionTime":"2025-12-12T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.478527 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp"] Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.479116 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.481979 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.482072 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.489518 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.489577 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.489601 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.489631 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.489686 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:42Z","lastTransitionTime":"2025-12-12T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.501859 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:41Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI1212 00:06:39.035317 6222 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:39.035352 6222 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035392 6222 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035420 6222 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035447 6222 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035485 6222 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035509 6222 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.037103 6222 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:39.035542 6222 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc
8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.518831 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17
c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.533395 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.547945 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.560393 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.572049 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.584437 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.598448 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.598499 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.598513 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:42 crc 
kubenswrapper[4917]: I1212 00:06:42.598534 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.598548 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:42Z","lastTransitionTime":"2025-12-12T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.599021 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.601553 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/92a97d2a-f733-4608-819e-a5c10747433b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.601607 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/92a97d2a-f733-4608-819e-a5c10747433b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.602957 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2zzd\" (UniqueName: \"kubernetes.io/projected/92a97d2a-f733-4608-819e-a5c10747433b-kube-api-access-t2zzd\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.603105 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/92a97d2a-f733-4608-819e-a5c10747433b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: 
\"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.614565 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.639213 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.651047 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.661365 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.679714 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.697848 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f917285
7465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.701897 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.701934 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.701948 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.701968 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.701982 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:42Z","lastTransitionTime":"2025-12-12T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.704075 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/92a97d2a-f733-4608-819e-a5c10747433b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.704149 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/92a97d2a-f733-4608-819e-a5c10747433b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.704215 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2zzd\" (UniqueName: \"kubernetes.io/projected/92a97d2a-f733-4608-819e-a5c10747433b-kube-api-access-t2zzd\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.704258 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/92a97d2a-f733-4608-819e-a5c10747433b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.704901 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/92a97d2a-f733-4608-819e-a5c10747433b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.705067 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/92a97d2a-f733-4608-819e-a5c10747433b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.709332 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/92a97d2a-f733-4608-819e-a5c10747433b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.710023 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.725239 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2zzd\" (UniqueName: \"kubernetes.io/projected/92a97d2a-f733-4608-819e-a5c10747433b-kube-api-access-t2zzd\") pod \"ovnkube-control-plane-749d76644c-wm9sp\" (UID: \"92a97d2a-f733-4608-819e-a5c10747433b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.792179 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.804508 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.804540 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.804549 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.804564 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.804574 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:42Z","lastTransitionTime":"2025-12-12T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:42 crc kubenswrapper[4917]: W1212 00:06:42.807101 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a97d2a_f733_4608_819e_a5c10747433b.slice/crio-c6aa0604ecbebace09b259a23e14ae5298465b0878a8e933d0bfb5f4152ce800 WatchSource:0}: Error finding container c6aa0604ecbebace09b259a23e14ae5298465b0878a8e933d0bfb5f4152ce800: Status 404 returned error can't find the container with id c6aa0604ecbebace09b259a23e14ae5298465b0878a8e933d0bfb5f4152ce800 Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.881034 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" event={"ID":"92a97d2a-f733-4608-819e-a5c10747433b","Type":"ContainerStarted","Data":"c6aa0604ecbebace09b259a23e14ae5298465b0878a8e933d0bfb5f4152ce800"} Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.909197 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.909251 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.909263 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.909281 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:42 crc kubenswrapper[4917]: I1212 00:06:42.909294 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:42Z","lastTransitionTime":"2025-12-12T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.012270 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.012307 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.012317 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.012332 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.012342 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:43Z","lastTransitionTime":"2025-12-12T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.114989 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.115019 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.115028 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.115054 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.115064 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:43Z","lastTransitionTime":"2025-12-12T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.217494 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.217534 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.217542 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.217556 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.217564 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:43Z","lastTransitionTime":"2025-12-12T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.320687 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.320740 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.320754 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.320769 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.320781 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:43Z","lastTransitionTime":"2025-12-12T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.423978 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.424033 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.424065 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.424099 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.424120 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:43Z","lastTransitionTime":"2025-12-12T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.527208 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.527307 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.527334 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.527367 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.527389 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:43Z","lastTransitionTime":"2025-12-12T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.600937 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.600973 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:43 crc kubenswrapper[4917]: E1212 00:06:43.601086 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.601130 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:43 crc kubenswrapper[4917]: E1212 00:06:43.601267 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:43 crc kubenswrapper[4917]: E1212 00:06:43.601353 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.629690 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.629758 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.629780 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.629811 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.629833 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:43Z","lastTransitionTime":"2025-12-12T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.733047 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.733093 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.733105 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.733122 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.733135 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:43Z","lastTransitionTime":"2025-12-12T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.835828 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.835880 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.835898 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.835922 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.835941 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:43Z","lastTransitionTime":"2025-12-12T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.938751 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.938827 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.938850 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.938878 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.938903 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:43Z","lastTransitionTime":"2025-12-12T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.969708 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-f4t96"] Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.970531 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:43 crc kubenswrapper[4917]: E1212 00:06:43.970690 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:06:43 crc kubenswrapper[4917]: I1212 00:06:43.989173 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.007362 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.023292 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.038264 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.042142 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.042216 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.042238 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.042265 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.042291 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:44Z","lastTransitionTime":"2025-12-12T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.054333 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.066906 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.082790 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.099462 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f917285
7465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.114930 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.118029 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4ftb\" (UniqueName: \"kubernetes.io/projected/58f4853b-9736-4a03-8c86-1627cb51acbe-kube-api-access-k4ftb\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.118125 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.126116 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc 
kubenswrapper[4917]: I1212 00:06:44.145241 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.145331 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.145352 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.145379 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.145396 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:44Z","lastTransitionTime":"2025-12-12T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.148529 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:41Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI1212 00:06:39.035317 6222 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:39.035352 6222 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035392 6222 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035420 6222 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035447 6222 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035485 6222 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035509 6222 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.037103 6222 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:39.035542 6222 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc
8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.170940 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17
c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.184320 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.197215 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.213292 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.218723 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 
00:06:44.218811 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4ftb\" (UniqueName: \"kubernetes.io/projected/58f4853b-9736-4a03-8c86-1627cb51acbe-kube-api-access-k4ftb\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:44 crc kubenswrapper[4917]: E1212 00:06:44.218894 4917 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:44 crc kubenswrapper[4917]: E1212 00:06:44.218989 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs podName:58f4853b-9736-4a03-8c86-1627cb51acbe nodeName:}" failed. No retries permitted until 2025-12-12 00:06:44.718967531 +0000 UTC m=+39.496768524 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs") pod "network-metrics-daemon-f4t96" (UID: "58f4853b-9736-4a03-8c86-1627cb51acbe") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.235414 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.244741 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4ftb\" (UniqueName: \"kubernetes.io/projected/58f4853b-9736-4a03-8c86-1627cb51acbe-kube-api-access-k4ftb\") pod \"network-metrics-daemon-f4t96\" (UID: 
\"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.250373 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.250415 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.250426 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.250441 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.250465 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:44Z","lastTransitionTime":"2025-12-12T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.352766 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.352818 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.352832 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.352848 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.352860 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:44Z","lastTransitionTime":"2025-12-12T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.455074 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.455130 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.455141 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.455158 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.455169 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:44Z","lastTransitionTime":"2025-12-12T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.558315 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.558373 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.558383 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.558399 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.558430 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:44Z","lastTransitionTime":"2025-12-12T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.661213 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.661268 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.661277 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.661290 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.661298 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:44Z","lastTransitionTime":"2025-12-12T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.724025 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:44 crc kubenswrapper[4917]: E1212 00:06:44.724210 4917 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:44 crc kubenswrapper[4917]: E1212 00:06:44.724279 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs podName:58f4853b-9736-4a03-8c86-1627cb51acbe nodeName:}" failed. No retries permitted until 2025-12-12 00:06:45.724261425 +0000 UTC m=+40.502062238 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs") pod "network-metrics-daemon-f4t96" (UID: "58f4853b-9736-4a03-8c86-1627cb51acbe") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.764156 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.764196 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.764222 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.764238 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.764249 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:44Z","lastTransitionTime":"2025-12-12T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.866462 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.866505 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.866514 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.866529 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.866540 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:44Z","lastTransitionTime":"2025-12-12T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.890250 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" event={"ID":"92a97d2a-f733-4608-819e-a5c10747433b","Type":"ContainerStarted","Data":"9367706cf7f265e6151e198ba075d608d52a42d17a97d4cae35e37a050155d3b"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.890332 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" event={"ID":"92a97d2a-f733-4608-819e-a5c10747433b","Type":"ContainerStarted","Data":"029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.892731 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/0.log" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.895322 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.895777 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.910044 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.924053 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.936302 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.954524 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.968498 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.968556 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.968566 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.968587 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.968600 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:44Z","lastTransitionTime":"2025-12-12T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.973119 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.984536 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:44 crc kubenswrapper[4917]: I1212 00:06:44.996938 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc 
kubenswrapper[4917]: I1212 00:06:45.008515 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.025874 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:41Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI1212 00:06:39.035317 6222 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:39.035352 6222 reflector.go:311] Stopping reflector 
*v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035392 6222 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035420 6222 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035447 6222 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035485 6222 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035509 6222 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.037103 6222 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:39.035542 6222 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.055351 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17
c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.071298 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.071340 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.071350 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.071576 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.071601 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:45Z","lastTransitionTime":"2025-12-12T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.087875 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.108093 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.122906 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.138710 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.150486 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.162458 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.174269 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.174300 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.174311 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.174325 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.174335 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:45Z","lastTransitionTime":"2025-12-12T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.276828 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.276889 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.276902 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.276925 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.276940 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:45Z","lastTransitionTime":"2025-12-12T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.379862 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.379927 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.379946 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.379969 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.379986 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:45Z","lastTransitionTime":"2025-12-12T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.483590 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.483721 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.483746 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.483907 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.483958 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:45Z","lastTransitionTime":"2025-12-12T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.587281 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.587340 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.587357 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.587393 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.587431 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:45Z","lastTransitionTime":"2025-12-12T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.601864 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.601888 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.601870 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:45 crc kubenswrapper[4917]: E1212 00:06:45.602623 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.602659 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:45 crc kubenswrapper[4917]: E1212 00:06:45.602725 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:45 crc kubenswrapper[4917]: E1212 00:06:45.602837 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:45 crc kubenswrapper[4917]: E1212 00:06:45.602993 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.618879 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.635499 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.649695 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.662983 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.674827 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.686938 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.688936 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.688996 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.689013 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.689031 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.689044 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:45Z","lastTransitionTime":"2025-12-12T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.701456 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z 
is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.716123 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.732353 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.732994 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:45 crc kubenswrapper[4917]: E1212 00:06:45.733219 4917 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:45 crc kubenswrapper[4917]: E1212 00:06:45.733354 4917 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs podName:58f4853b-9736-4a03-8c86-1627cb51acbe nodeName:}" failed. No retries permitted until 2025-12-12 00:06:47.733326475 +0000 UTC m=+42.511127288 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs") pod "network-metrics-daemon-f4t96" (UID: "58f4853b-9736-4a03-8c86-1627cb51acbe") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.745387 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc 
kubenswrapper[4917]: I1212 00:06:45.759580 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.778501 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:41Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI1212 00:06:39.035317 6222 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:39.035352 6222 reflector.go:311] Stopping reflector 
*v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035392 6222 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035420 6222 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035447 6222 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035485 6222 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035509 6222 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.037103 6222 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:39.035542 6222 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.791776 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.791824 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.791837 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.791858 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.791873 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:45Z","lastTransitionTime":"2025-12-12T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.795402 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7
548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.810233 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.826851 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.842069 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.894910 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.894970 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.894981 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.894998 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.895013 4917 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:45Z","lastTransitionTime":"2025-12-12T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.898777 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/1.log" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.899395 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/0.log" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.902427 4917 generic.go:334] "Generic (PLEG): container finished" podID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerID="10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6" exitCode=1 Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.904324 4917 scope.go:117] "RemoveContainer" containerID="10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.904477 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6"} Dec 12 00:06:45 crc kubenswrapper[4917]: E1212 00:06:45.904545 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-26hjd_openshift-ovn-kubernetes(c740630c-23cb-4c02-ab4e-bac3d773dce4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.904858 4917 scope.go:117] "RemoveContainer" containerID="a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.917070 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.927875 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.939101 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf7f265e6151e198ba075d608d52a4
2d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.949567 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.960163 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.971785 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f917285
7465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.980583 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.992348 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12
T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.997355 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.997470 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.997483 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.997503 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:45 crc kubenswrapper[4917]: I1212 00:06:45.997515 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:45Z","lastTransitionTime":"2025-12-12T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.010101 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.022173 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc 
kubenswrapper[4917]: I1212 00:06:46.038372 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86d
eff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 
00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.051845 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.071014 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:41Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI1212 00:06:39.035317 6222 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:39.035352 6222 reflector.go:311] Stopping reflector 
*v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035392 6222 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035420 6222 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035447 6222 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035485 6222 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035509 6222 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.037103 6222 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:39.035542 6222 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.083593 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.097303 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.099909 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.099964 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.099982 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc 
kubenswrapper[4917]: I1212 00:06:46.100000 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.100013 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.111216 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.126443 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17
c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.136752 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.153264 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a093f2f1a02ee558983a4b021a407100e94b582007eb29276cccb020c61990ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:41Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI1212 00:06:39.035317 6222 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:39.035352 6222 reflector.go:311] Stopping reflector 
*v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035392 6222 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035420 6222 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:39.035447 6222 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035485 6222 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.035509 6222 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:39.037103 6222 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:39.035542 6222 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 
6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.164335 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.174855 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.185915 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.197187 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.202509 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.202983 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.203004 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.203018 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.203046 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.212478 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.227097 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf
7f265e6151e198ba075d608d52a42d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.239940 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.252064 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.267846 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f917285
7465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.278720 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.291803 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12
T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.303485 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.304901 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.304937 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.304950 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.304965 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.304976 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.314305 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc 
kubenswrapper[4917]: I1212 00:06:46.408723 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.408802 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.408820 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.408862 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.408878 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.512342 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.512381 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.512390 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.512407 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.512417 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.616473 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.616553 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.616570 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.616600 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.616632 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.719939 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.720003 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.720024 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.720050 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.720067 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.823953 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.824021 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.824033 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.824057 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.824074 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.870490 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.870548 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.870565 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.870590 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.870607 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: E1212 00:06:46.893970 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.900219 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.900283 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.900295 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.900321 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.900334 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.911298 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/1.log" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.917450 4917 scope.go:117] "RemoveContainer" containerID="10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6" Dec 12 00:06:46 crc kubenswrapper[4917]: E1212 00:06:46.917857 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-26hjd_openshift-ovn-kubernetes(c740630c-23cb-4c02-ab4e-bac3d773dce4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" Dec 12 00:06:46 crc kubenswrapper[4917]: E1212 00:06:46.919883 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.925561 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.925683 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.925722 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.925755 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.925778 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.952143 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: E1212 00:06:46.957516 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.961708 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.961763 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.961780 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.961801 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.961814 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.971297 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: E1212 00:06:46.975261 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.979472 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.979606 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.979619 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.979655 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.979670 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.986986 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: E1212 00:06:46.991897 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:46 crc kubenswrapper[4917]: E1212 00:06:46.992058 4917 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.993667 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.993717 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.993726 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.993745 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:46 crc kubenswrapper[4917]: I1212 00:06:46.993758 4917 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:46Z","lastTransitionTime":"2025-12-12T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.004816 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":
\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.022204 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb0
85a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":
\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.033840 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.045134 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc 
kubenswrapper[4917]: I1212 00:06:47.057433 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.075930 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-26hjd_openshift-ovn-kubernetes(c740630c-23cb-4c02-ab4e-bac3d773dce4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567
e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.091583 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17
c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.095568 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.095604 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.095616 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.095634 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.095698 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:47Z","lastTransitionTime":"2025-12-12T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.111774 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.125517 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.136319 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.149366 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.162077 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.175948 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf7f265e6151e198ba075d608d52a4
2d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.202316 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.202383 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.202394 4917 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.202421 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.202437 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:47Z","lastTransitionTime":"2025-12-12T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.304129 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.304169 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.304177 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.304189 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.304197 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:47Z","lastTransitionTime":"2025-12-12T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.406241 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.406352 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.406366 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.406386 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.406401 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:47Z","lastTransitionTime":"2025-12-12T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.509185 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.509249 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.509261 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.509280 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.509292 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:47Z","lastTransitionTime":"2025-12-12T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.601055 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.601107 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:47 crc kubenswrapper[4917]: E1212 00:06:47.601223 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.601339 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.601481 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:47 crc kubenswrapper[4917]: E1212 00:06:47.601628 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:47 crc kubenswrapper[4917]: E1212 00:06:47.601710 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:47 crc kubenswrapper[4917]: E1212 00:06:47.601792 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.611828 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.611872 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.611883 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.611899 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.611915 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:47Z","lastTransitionTime":"2025-12-12T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.715197 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.715248 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.715261 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.715278 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.715289 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:47Z","lastTransitionTime":"2025-12-12T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.755277 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:47 crc kubenswrapper[4917]: E1212 00:06:47.755527 4917 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:47 crc kubenswrapper[4917]: E1212 00:06:47.755629 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs podName:58f4853b-9736-4a03-8c86-1627cb51acbe nodeName:}" failed. No retries permitted until 2025-12-12 00:06:51.755603449 +0000 UTC m=+46.533404452 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs") pod "network-metrics-daemon-f4t96" (UID: "58f4853b-9736-4a03-8c86-1627cb51acbe") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.819487 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.819533 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.819543 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.819559 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.819569 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:47Z","lastTransitionTime":"2025-12-12T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.921606 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.921712 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.921743 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.921775 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:47 crc kubenswrapper[4917]: I1212 00:06:47.921797 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:47Z","lastTransitionTime":"2025-12-12T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.024168 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.024204 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.024214 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.024227 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.024238 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:48Z","lastTransitionTime":"2025-12-12T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.126894 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.126929 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.126938 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.126952 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.126963 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:48Z","lastTransitionTime":"2025-12-12T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.228927 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.229026 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.229050 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.229080 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.229094 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:48Z","lastTransitionTime":"2025-12-12T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.331571 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.331637 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.331669 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.331688 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.331700 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:48Z","lastTransitionTime":"2025-12-12T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.434735 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.434775 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.434784 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.434798 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.434807 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:48Z","lastTransitionTime":"2025-12-12T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.537017 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.537066 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.537075 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.537090 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.537100 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:48Z","lastTransitionTime":"2025-12-12T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.639623 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.639683 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.639692 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.639714 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.639724 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:48Z","lastTransitionTime":"2025-12-12T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.742093 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.742129 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.742156 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.742171 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.742180 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:48Z","lastTransitionTime":"2025-12-12T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.845348 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.845414 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.845427 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.845444 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.845458 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:48Z","lastTransitionTime":"2025-12-12T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.947445 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.947496 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.947513 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.947531 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:48 crc kubenswrapper[4917]: I1212 00:06:48.947543 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:48Z","lastTransitionTime":"2025-12-12T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.049675 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.049737 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.049749 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.049762 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.049771 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:49Z","lastTransitionTime":"2025-12-12T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.152154 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.152205 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.152216 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.152231 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.152241 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:49Z","lastTransitionTime":"2025-12-12T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.255082 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.255127 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.255137 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.255153 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.255163 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:49Z","lastTransitionTime":"2025-12-12T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.358843 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.358964 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.358988 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.359017 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.359039 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:49Z","lastTransitionTime":"2025-12-12T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.462225 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.462264 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.462275 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.462293 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.462305 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:49Z","lastTransitionTime":"2025-12-12T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.565483 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.565530 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.565539 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.565555 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.565565 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:49Z","lastTransitionTime":"2025-12-12T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.601857 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.601912 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.601998 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:49 crc kubenswrapper[4917]: E1212 00:06:49.602124 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:06:49 crc kubenswrapper[4917]: E1212 00:06:49.602254 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.602337 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:49 crc kubenswrapper[4917]: E1212 00:06:49.602402 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:49 crc kubenswrapper[4917]: E1212 00:06:49.602748 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.668006 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.668049 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.668063 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.668086 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.668100 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:49Z","lastTransitionTime":"2025-12-12T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.783109 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.783155 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.783167 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.783185 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.783196 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:49Z","lastTransitionTime":"2025-12-12T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.886148 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.886194 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.886209 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.886229 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.886246 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:49Z","lastTransitionTime":"2025-12-12T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.988414 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.988445 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.988477 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.988492 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:49 crc kubenswrapper[4917]: I1212 00:06:49.988503 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:49Z","lastTransitionTime":"2025-12-12T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.092236 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.092297 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.092311 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.092332 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.092345 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:50Z","lastTransitionTime":"2025-12-12T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.195069 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.195139 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.195163 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.195191 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.195205 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:50Z","lastTransitionTime":"2025-12-12T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.298170 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.298224 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.298236 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.298253 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.298264 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:50Z","lastTransitionTime":"2025-12-12T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.401494 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.401564 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.401578 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.401603 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.401618 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:50Z","lastTransitionTime":"2025-12-12T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.504586 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.504670 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.504682 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.504705 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.504719 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:50Z","lastTransitionTime":"2025-12-12T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.607593 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.607634 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.607663 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.607680 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.607689 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:50Z","lastTransitionTime":"2025-12-12T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.711958 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.712010 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.712018 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.712039 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.712054 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:50Z","lastTransitionTime":"2025-12-12T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.814574 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.814610 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.814619 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.814633 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.814654 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:50Z","lastTransitionTime":"2025-12-12T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.917196 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.917266 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.917282 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.917298 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:50 crc kubenswrapper[4917]: I1212 00:06:50.917315 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:50Z","lastTransitionTime":"2025-12-12T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.019572 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.019617 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.019633 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.019670 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.019683 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:51Z","lastTransitionTime":"2025-12-12T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.127414 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.127461 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.127472 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.127489 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.127500 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:51Z","lastTransitionTime":"2025-12-12T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.230691 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.230753 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.230766 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.230783 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.230794 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:51Z","lastTransitionTime":"2025-12-12T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.333415 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.333449 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.333459 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.333474 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.333484 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:51Z","lastTransitionTime":"2025-12-12T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.435514 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.435558 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.435572 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.435587 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.435597 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:51Z","lastTransitionTime":"2025-12-12T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.537667 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.537724 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.537734 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.537754 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.537809 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:51Z","lastTransitionTime":"2025-12-12T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.601593 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.601711 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:51 crc kubenswrapper[4917]: E1212 00:06:51.601750 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.601775 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:51 crc kubenswrapper[4917]: E1212 00:06:51.601986 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.602020 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:51 crc kubenswrapper[4917]: E1212 00:06:51.602100 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:51 crc kubenswrapper[4917]: E1212 00:06:51.602153 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.640144 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.640419 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.640540 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.640926 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.641487 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:51Z","lastTransitionTime":"2025-12-12T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.745175 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.745228 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.745248 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.745268 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.745282 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:51Z","lastTransitionTime":"2025-12-12T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.802982 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:51 crc kubenswrapper[4917]: E1212 00:06:51.803239 4917 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:51 crc kubenswrapper[4917]: E1212 00:06:51.803605 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs podName:58f4853b-9736-4a03-8c86-1627cb51acbe nodeName:}" failed. No retries permitted until 2025-12-12 00:06:59.803576173 +0000 UTC m=+54.581376986 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs") pod "network-metrics-daemon-f4t96" (UID: "58f4853b-9736-4a03-8c86-1627cb51acbe") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.848007 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.848443 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.848537 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.848678 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.848785 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:51Z","lastTransitionTime":"2025-12-12T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.952266 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.952334 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.952348 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.952370 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:51 crc kubenswrapper[4917]: I1212 00:06:51.952382 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:51Z","lastTransitionTime":"2025-12-12T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.055822 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.056576 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.056695 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.056787 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.056864 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:52Z","lastTransitionTime":"2025-12-12T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.159740 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.159803 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.159817 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.159840 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.159855 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:52Z","lastTransitionTime":"2025-12-12T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.262415 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.262729 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.262818 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.262898 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.262966 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:52Z","lastTransitionTime":"2025-12-12T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.365961 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.366232 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.366343 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.366415 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.366484 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:52Z","lastTransitionTime":"2025-12-12T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.469578 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.470048 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.470149 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.470243 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.470348 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:52Z","lastTransitionTime":"2025-12-12T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.573109 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.573158 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.573168 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.573181 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.573190 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:52Z","lastTransitionTime":"2025-12-12T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.674964 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.675312 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.675431 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.675535 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.675600 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:52Z","lastTransitionTime":"2025-12-12T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.778270 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.778338 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.778352 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.778377 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.778392 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:52Z","lastTransitionTime":"2025-12-12T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.880831 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.880891 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.880911 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.880939 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.880960 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:52Z","lastTransitionTime":"2025-12-12T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.984476 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.984964 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.985038 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.985108 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:52 crc kubenswrapper[4917]: I1212 00:06:52.985168 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:52Z","lastTransitionTime":"2025-12-12T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.088406 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.088476 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.088491 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.088530 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.088546 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:53Z","lastTransitionTime":"2025-12-12T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.190928 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.190972 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.191004 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.191019 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.191031 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:53Z","lastTransitionTime":"2025-12-12T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.293952 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.294007 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.294017 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.294032 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.294044 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:53Z","lastTransitionTime":"2025-12-12T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.396441 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.396863 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.396998 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.397107 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.397177 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:53Z","lastTransitionTime":"2025-12-12T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.500139 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.500213 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.500224 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.500239 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.500248 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:53Z","lastTransitionTime":"2025-12-12T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.601731 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.602135 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:53 crc kubenswrapper[4917]: E1212 00:06:53.602288 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:53 crc kubenswrapper[4917]: E1212 00:06:53.602354 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.602421 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.602534 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:53 crc kubenswrapper[4917]: E1212 00:06:53.603007 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:53 crc kubenswrapper[4917]: E1212 00:06:53.603116 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.604071 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.604138 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.604150 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.604173 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.604187 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:53Z","lastTransitionTime":"2025-12-12T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.707083 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.707139 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.707154 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.707172 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.707184 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:53Z","lastTransitionTime":"2025-12-12T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.810984 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.811479 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.811572 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.811730 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.811814 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:53Z","lastTransitionTime":"2025-12-12T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.914974 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.915028 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.915042 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.915061 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:53 crc kubenswrapper[4917]: I1212 00:06:53.915075 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:53Z","lastTransitionTime":"2025-12-12T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.018207 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.018249 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.018257 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.018272 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.018281 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:54Z","lastTransitionTime":"2025-12-12T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.120139 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.120637 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.120759 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.120859 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.120994 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:54Z","lastTransitionTime":"2025-12-12T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.223610 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.223682 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.223697 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.223715 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.223729 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:54Z","lastTransitionTime":"2025-12-12T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.326706 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.327072 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.327332 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.327692 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.327885 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:54Z","lastTransitionTime":"2025-12-12T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.431091 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.431130 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.431140 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.431156 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.431165 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:54Z","lastTransitionTime":"2025-12-12T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.533758 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.533912 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.533933 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.533956 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.533971 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:54Z","lastTransitionTime":"2025-12-12T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.636598 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.636680 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.636695 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.636715 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.636731 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:54Z","lastTransitionTime":"2025-12-12T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.739516 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.739555 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.739564 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.739577 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.739590 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:54Z","lastTransitionTime":"2025-12-12T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.841721 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.841759 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.841768 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.841781 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.841789 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:54Z","lastTransitionTime":"2025-12-12T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.944511 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.944583 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.944595 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.944616 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:54 crc kubenswrapper[4917]: I1212 00:06:54.944628 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:54Z","lastTransitionTime":"2025-12-12T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.047946 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.047980 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.047988 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.048004 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.048014 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:55Z","lastTransitionTime":"2025-12-12T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.150278 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.150322 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.150331 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.150347 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.150355 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:55Z","lastTransitionTime":"2025-12-12T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.252937 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.253204 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.253290 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.253414 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.253495 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:55Z","lastTransitionTime":"2025-12-12T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.356118 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.356166 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.356178 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.356195 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.356207 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:55Z","lastTransitionTime":"2025-12-12T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.441717 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.441856 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.441894 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.442020 4917 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.442077 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:07:27.442058709 +0000 UTC m=+82.219859522 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.442202 4917 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.442301 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:07:27.442279185 +0000 UTC m=+82.220080018 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.442343 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:07:27.442328796 +0000 UTC m=+82.220129629 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.458765 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.458839 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.458857 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.458883 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.458897 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:55Z","lastTransitionTime":"2025-12-12T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.543481 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.543590 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.543753 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.543800 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.543820 4917 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.543823 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 
00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.543852 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.543869 4917 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.543898 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 00:07:27.543877667 +0000 UTC m=+82.321678490 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.543940 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 00:07:27.543918758 +0000 UTC m=+82.321719751 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.561066 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.561132 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.561150 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.561174 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.561187 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:55Z","lastTransitionTime":"2025-12-12T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.601544 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.601705 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.602072 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.602120 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.602364 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.602150 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.602504 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:55 crc kubenswrapper[4917]: E1212 00:06:55.602711 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.618734 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:
07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.633399 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.648990 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.662506 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.664149 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.664294 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.664380 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.664470 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.664549 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:55Z","lastTransitionTime":"2025-12-12T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.676228 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.695053 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf
7f265e6151e198ba075d608d52a42d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.708434 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.722349 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.739513 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f917285
7465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.750543 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.763345 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12
T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.766972 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.767041 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.767055 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.767078 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.767099 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:55Z","lastTransitionTime":"2025-12-12T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.778208 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.789527 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc 
kubenswrapper[4917]: I1212 00:06:55.804172 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86d
eff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 
00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.818882 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.835857 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-26hjd_openshift-ovn-kubernetes(c740630c-23cb-4c02-ab4e-bac3d773dce4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567
e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.869358 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.869422 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.869435 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.869457 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.869470 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:55Z","lastTransitionTime":"2025-12-12T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.972444 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.972518 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.972537 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.972569 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:55 crc kubenswrapper[4917]: I1212 00:06:55.972593 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:55Z","lastTransitionTime":"2025-12-12T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.075207 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.075253 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.075262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.075276 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.075284 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:56Z","lastTransitionTime":"2025-12-12T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.177792 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.177839 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.177849 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.177866 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.177877 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:56Z","lastTransitionTime":"2025-12-12T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.281001 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.281059 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.281074 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.281093 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.281105 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:56Z","lastTransitionTime":"2025-12-12T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.384223 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.384269 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.384280 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.384295 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.384305 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:56Z","lastTransitionTime":"2025-12-12T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.491614 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.491688 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.491703 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.491721 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.491741 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:56Z","lastTransitionTime":"2025-12-12T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.594723 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.594775 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.594788 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.594805 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.594818 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:56Z","lastTransitionTime":"2025-12-12T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.696725 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.696974 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.697071 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.697158 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.697249 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:56Z","lastTransitionTime":"2025-12-12T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.800355 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.800446 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.800456 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.800473 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.800484 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:56Z","lastTransitionTime":"2025-12-12T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.903281 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.903606 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.903752 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.903861 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:56 crc kubenswrapper[4917]: I1212 00:06:56.903960 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:56Z","lastTransitionTime":"2025-12-12T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.007030 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.007419 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.007506 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.007600 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.007719 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.111244 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.111631 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.111792 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.111877 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.111945 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.215495 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.215529 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.215537 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.215549 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.215557 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.317635 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.317733 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.317745 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.317769 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.317782 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.323086 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.323153 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.323173 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.323203 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.323221 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 00:06:57 crc kubenswrapper[4917]: E1212 00:06:57.337814 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:57Z is after 2025-08-24T17:21:41Z"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.342581 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.342699 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.342716 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.342738 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.342750 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 00:06:57 crc kubenswrapper[4917]: E1212 00:06:57.356068 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:57Z is after 2025-08-24T17:21:41Z"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.361773 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.361961 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.362035 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.362138 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.362214 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 00:06:57 crc kubenswrapper[4917]: E1212 00:06:57.376133 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.381592 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.381628 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.381657 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.381680 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.381693 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:57 crc kubenswrapper[4917]: E1212 00:06:57.394446 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.399584 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.399694 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.399714 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.399739 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.399756 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:57 crc kubenswrapper[4917]: E1212 00:06:57.413773 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:57 crc kubenswrapper[4917]: E1212 00:06:57.414138 4917 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.420854 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.421053 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.421083 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.421149 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.421178 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.523837 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.523921 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.523935 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.523957 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.523993 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.603314 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:57 crc kubenswrapper[4917]: E1212 00:06:57.603432 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.603635 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:57 crc kubenswrapper[4917]: E1212 00:06:57.603714 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.604035 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:57 crc kubenswrapper[4917]: E1212 00:06:57.604095 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.604140 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:57 crc kubenswrapper[4917]: E1212 00:06:57.604187 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.626778 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.626834 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.626846 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.626869 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.626883 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.730379 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.730458 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.730483 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.730517 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.730552 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.834951 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.834998 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.835010 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.835032 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.835049 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.938161 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.938235 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.938250 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.938280 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:57 crc kubenswrapper[4917]: I1212 00:06:57.938296 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:57Z","lastTransitionTime":"2025-12-12T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.041470 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.041564 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.041580 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.041630 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.041674 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:58Z","lastTransitionTime":"2025-12-12T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.144890 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.144966 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.144985 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.145010 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.145021 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:58Z","lastTransitionTime":"2025-12-12T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.247418 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.247473 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.247488 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.247508 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.247525 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:58Z","lastTransitionTime":"2025-12-12T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.350229 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.350285 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.350296 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.350311 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.350323 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:58Z","lastTransitionTime":"2025-12-12T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.453526 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.453590 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.453602 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.453626 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.453659 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:58Z","lastTransitionTime":"2025-12-12T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.557354 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.557432 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.557447 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.557503 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.557519 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:58Z","lastTransitionTime":"2025-12-12T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.660618 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.660724 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.660745 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.660822 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.660844 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:58Z","lastTransitionTime":"2025-12-12T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.763019 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.763064 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.763073 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.763087 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.763100 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:58Z","lastTransitionTime":"2025-12-12T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.866723 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.866771 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.866779 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.866795 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.866806 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:58Z","lastTransitionTime":"2025-12-12T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.969697 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.969759 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.969769 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.969787 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:58 crc kubenswrapper[4917]: I1212 00:06:58.969798 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:58Z","lastTransitionTime":"2025-12-12T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.072777 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.072838 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.072850 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.072878 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.072893 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:59Z","lastTransitionTime":"2025-12-12T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.176036 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.176102 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.176116 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.176134 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.176148 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:59Z","lastTransitionTime":"2025-12-12T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.279400 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.279481 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.279506 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.279538 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.279564 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:59Z","lastTransitionTime":"2025-12-12T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.382335 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.382387 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.382398 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.382420 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.382432 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:59Z","lastTransitionTime":"2025-12-12T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.485317 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.485358 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.485367 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.485381 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.485391 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:59Z","lastTransitionTime":"2025-12-12T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.588270 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.588310 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.588319 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.588335 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.588343 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:59Z","lastTransitionTime":"2025-12-12T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.601694 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.601796 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.601839 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:59 crc kubenswrapper[4917]: E1212 00:06:59.601918 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.601949 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:06:59 crc kubenswrapper[4917]: E1212 00:06:59.602044 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:06:59 crc kubenswrapper[4917]: E1212 00:06:59.602075 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:06:59 crc kubenswrapper[4917]: E1212 00:06:59.602201 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.690928 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.691000 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.691011 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.691031 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.691044 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:59Z","lastTransitionTime":"2025-12-12T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.726826 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.736326 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.743928 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.757768 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.775979 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.794804 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf7f265e6151e198ba075d608d52a42d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.795232 4917 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.795253 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.795395 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.795414 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.795425 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:59Z","lastTransitionTime":"2025-12-12T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.809477 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.820964 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.836419 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.848854 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.866781 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.884715 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f917285
7465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.889832 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:06:59 crc kubenswrapper[4917]: E1212 00:06:59.890035 4917 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:59 crc kubenswrapper[4917]: E1212 00:06:59.890190 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs podName:58f4853b-9736-4a03-8c86-1627cb51acbe nodeName:}" failed. No retries permitted until 2025-12-12 00:07:15.890122185 +0000 UTC m=+70.667923038 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs") pod "network-metrics-daemon-f4t96" (UID: "58f4853b-9736-4a03-8c86-1627cb51acbe") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.898189 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.898739 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.898770 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.898782 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.898798 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.898811 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:06:59Z","lastTransitionTime":"2025-12-12T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.912980 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.925500 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc 
kubenswrapper[4917]: I1212 00:06:59.942624 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86d
eff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 
00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.954508 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:06:59 crc kubenswrapper[4917]: I1212 00:06:59.980005 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-26hjd_openshift-ovn-kubernetes(c740630c-23cb-4c02-ab4e-bac3d773dce4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567
e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.001846 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.001920 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.001933 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.002067 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.002089 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:00Z","lastTransitionTime":"2025-12-12T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.105323 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.105543 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.105571 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.105733 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.105834 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:00Z","lastTransitionTime":"2025-12-12T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.209750 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.209838 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.209871 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.209897 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.209913 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:00Z","lastTransitionTime":"2025-12-12T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.312867 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.312917 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.312929 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.312949 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.312966 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:00Z","lastTransitionTime":"2025-12-12T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.415945 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.415987 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.415996 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.416009 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.416019 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:00Z","lastTransitionTime":"2025-12-12T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.518851 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.518914 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.518936 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.518961 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.518974 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:00Z","lastTransitionTime":"2025-12-12T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.622594 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.622634 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.622667 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.622686 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.622701 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:00Z","lastTransitionTime":"2025-12-12T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.725625 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.725915 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.725926 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.725946 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.725961 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:00Z","lastTransitionTime":"2025-12-12T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.828890 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.828959 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.828971 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.828996 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.829010 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:00Z","lastTransitionTime":"2025-12-12T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.932591 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.932723 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.932753 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.932788 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:00 crc kubenswrapper[4917]: I1212 00:07:00.932811 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:00Z","lastTransitionTime":"2025-12-12T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.036262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.036345 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.036367 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.036398 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.036414 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:01Z","lastTransitionTime":"2025-12-12T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.139064 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.139108 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.139117 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.139134 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.139144 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:01Z","lastTransitionTime":"2025-12-12T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.242736 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.242792 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.242801 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.242822 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.242840 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:01Z","lastTransitionTime":"2025-12-12T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.345576 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.345622 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.345675 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.345699 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.345746 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:01Z","lastTransitionTime":"2025-12-12T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.453907 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.453967 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.453984 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.454004 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.454015 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:01Z","lastTransitionTime":"2025-12-12T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.560012 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.560066 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.560077 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.560100 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.560114 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:01Z","lastTransitionTime":"2025-12-12T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.602318 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.602461 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.602607 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.602693 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:01 crc kubenswrapper[4917]: E1212 00:07:01.602827 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:01 crc kubenswrapper[4917]: E1212 00:07:01.603029 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:01 crc kubenswrapper[4917]: E1212 00:07:01.603079 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:01 crc kubenswrapper[4917]: E1212 00:07:01.603134 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.603134 4917 scope.go:117] "RemoveContainer" containerID="10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.663220 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.663268 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.663280 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.663303 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.663313 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:01Z","lastTransitionTime":"2025-12-12T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.767795 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.767856 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.767868 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.767891 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.767905 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:01Z","lastTransitionTime":"2025-12-12T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.870819 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.871270 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.871285 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.871308 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.871324 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:01Z","lastTransitionTime":"2025-12-12T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.974546 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.974584 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.974594 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.974613 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:01 crc kubenswrapper[4917]: I1212 00:07:01.974624 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:01Z","lastTransitionTime":"2025-12-12T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.004581 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/1.log" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.007096 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15"} Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.007597 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.027965 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 
00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.048821 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.077368 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.077405 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.077414 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.077430 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.077441 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:02Z","lastTransitionTime":"2025-12-12T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.080069 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.103930 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.122917 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.143546 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.163020 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.180396 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.180453 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.180469 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.180492 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.180509 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:02Z","lastTransitionTime":"2025-12-12T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.181574 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.197986 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf
7f265e6151e198ba075d608d52a42d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.214776 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd3b1e30-806e-4e60-8457-5a8dc8255b49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db3014d28fbd4ad947b0bb07b8e2cdd07a9a42923f12a89dafbb482228861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf369a1cb2eaf2ca16299cd6a8e314ae2693ede79e120eaae657dcfc7c1629c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ac02655901d9d729b6cf2cccf17ed4104f1d1f568d813c68920686068db586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.232443 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.247622 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.261821 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.277288 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.283621 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.283694 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.283707 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.283730 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.283746 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:02Z","lastTransitionTime":"2025-12-12T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.294905 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.314586 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.327225 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:02 crc 
kubenswrapper[4917]: I1212 00:07:02.387155 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.387230 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.387247 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.387273 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.387288 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:02Z","lastTransitionTime":"2025-12-12T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.490379 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.490699 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.490758 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.490786 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.490799 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:02Z","lastTransitionTime":"2025-12-12T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.594385 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.594445 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.594461 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.594480 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.594493 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:02Z","lastTransitionTime":"2025-12-12T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.697388 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.697434 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.697444 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.697462 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.697475 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:02Z","lastTransitionTime":"2025-12-12T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.801279 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.801335 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.801345 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.801367 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.801379 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:02Z","lastTransitionTime":"2025-12-12T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.904428 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.904486 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.904501 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.904525 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:02 crc kubenswrapper[4917]: I1212 00:07:02.904544 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:02Z","lastTransitionTime":"2025-12-12T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.007372 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.007426 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.007437 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.007454 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.007466 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:03Z","lastTransitionTime":"2025-12-12T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.109825 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.109886 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.109900 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.109923 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.109941 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:03Z","lastTransitionTime":"2025-12-12T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.213730 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.213781 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.213795 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.213816 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.213831 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:03Z","lastTransitionTime":"2025-12-12T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.317241 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.317295 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.317304 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.317319 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.317329 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:03Z","lastTransitionTime":"2025-12-12T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.420232 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.420292 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.420303 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.420325 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.420338 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:03Z","lastTransitionTime":"2025-12-12T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.522483 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.522536 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.522546 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.522567 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.522582 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:03Z","lastTransitionTime":"2025-12-12T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.601293 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.601468 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:03 crc kubenswrapper[4917]: E1212 00:07:03.601611 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.601684 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.601738 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:03 crc kubenswrapper[4917]: E1212 00:07:03.601847 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:03 crc kubenswrapper[4917]: E1212 00:07:03.601973 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:03 crc kubenswrapper[4917]: E1212 00:07:03.602227 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.625484 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.625761 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.625773 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.625793 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.625810 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:03Z","lastTransitionTime":"2025-12-12T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.728258 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.728323 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.728341 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.728367 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.728386 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:03Z","lastTransitionTime":"2025-12-12T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.831132 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.831181 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.831192 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.831209 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.831221 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:03Z","lastTransitionTime":"2025-12-12T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.933778 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.933851 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.933875 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.933903 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:03 crc kubenswrapper[4917]: I1212 00:07:03.933922 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:03Z","lastTransitionTime":"2025-12-12T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.036849 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.036910 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.036924 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.036943 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.036962 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:04Z","lastTransitionTime":"2025-12-12T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.139723 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.139787 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.139801 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.139823 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.139836 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:04Z","lastTransitionTime":"2025-12-12T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.242256 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.242318 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.242329 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.242350 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.242363 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:04Z","lastTransitionTime":"2025-12-12T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.345265 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.345316 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.345333 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.345351 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.345362 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:04Z","lastTransitionTime":"2025-12-12T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.448212 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.448272 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.448281 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.448299 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.448317 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:04Z","lastTransitionTime":"2025-12-12T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.556280 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.556342 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.556356 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.556376 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.556387 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:04Z","lastTransitionTime":"2025-12-12T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.660290 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.660353 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.660365 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.660380 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.660393 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:04Z","lastTransitionTime":"2025-12-12T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.763798 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.763859 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.763870 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.763889 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.763903 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:04Z","lastTransitionTime":"2025-12-12T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.867691 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.867730 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.867739 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.867755 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.867766 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:04Z","lastTransitionTime":"2025-12-12T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.971024 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.971071 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.971081 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.971105 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:04 crc kubenswrapper[4917]: I1212 00:07:04.971116 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:04Z","lastTransitionTime":"2025-12-12T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.075172 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.075268 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.075284 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.075308 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.075320 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:05Z","lastTransitionTime":"2025-12-12T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.178590 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.178664 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.178675 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.178695 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.178712 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:05Z","lastTransitionTime":"2025-12-12T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.282767 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.282826 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.282841 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.282862 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.282878 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:05Z","lastTransitionTime":"2025-12-12T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.386005 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.386058 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.386069 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.386093 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.386107 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:05Z","lastTransitionTime":"2025-12-12T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.489408 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.489487 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.489505 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.489531 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.489549 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:05Z","lastTransitionTime":"2025-12-12T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.593886 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.593941 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.593953 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.593974 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.593987 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:05Z","lastTransitionTime":"2025-12-12T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.600916 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.600965 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:05 crc kubenswrapper[4917]: E1212 00:07:05.601351 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.601087 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.601060 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:05 crc kubenswrapper[4917]: E1212 00:07:05.601722 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:05 crc kubenswrapper[4917]: E1212 00:07:05.601628 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:05 crc kubenswrapper[4917]: E1212 00:07:05.601538 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.622677 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 
00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.637068 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.659004 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.673241 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.688346 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.696317 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.696351 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.696362 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:05 crc 
kubenswrapper[4917]: I1212 00:07:05.696379 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.696391 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:05Z","lastTransitionTime":"2025-12-12T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.711446 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.728195 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.745711 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.761842 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf7f265e6151e198ba075d608d52a4
2d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.773942 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.789092 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.799260 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.799295 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.799304 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.799322 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.799336 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:05Z","lastTransitionTime":"2025-12-12T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.806489 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.818796 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.830173 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd3b1e30-806e-4e60-8457-5a8dc8255b49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db3014d28fbd4ad947b0bb07b8e2cdd07a9a42923f12a89dafbb482228861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf369a1cb2eaf2ca16299cd6a8e314ae2693ede79e120eaae657dcfc7c1629c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ac02655901d9d729b6cf2cccf17ed4104f1d1f568d813c68920686068db586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.844955 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.860783 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.874427 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:05 crc 
kubenswrapper[4917]: I1212 00:07:05.902209 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.902267 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.902278 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.902304 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:05 crc kubenswrapper[4917]: I1212 00:07:05.902322 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:05Z","lastTransitionTime":"2025-12-12T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.004445 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.004502 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.004565 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.004589 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.004603 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:06Z","lastTransitionTime":"2025-12-12T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.107446 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.107494 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.107510 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.107535 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.107550 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:06Z","lastTransitionTime":"2025-12-12T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.210773 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.210831 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.210845 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.210866 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.210880 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:06Z","lastTransitionTime":"2025-12-12T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.314551 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.314598 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.314611 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.314634 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.314862 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:06Z","lastTransitionTime":"2025-12-12T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.417684 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.417721 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.417733 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.417751 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.417763 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:06Z","lastTransitionTime":"2025-12-12T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.521127 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.521210 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.521235 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.521263 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.521280 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:06Z","lastTransitionTime":"2025-12-12T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.625522 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.625610 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.625623 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.625688 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.625704 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:06Z","lastTransitionTime":"2025-12-12T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.728727 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.728783 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.728794 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.728810 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.728827 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:06Z","lastTransitionTime":"2025-12-12T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.832167 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.832242 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.832257 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.832279 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.832294 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:06Z","lastTransitionTime":"2025-12-12T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.935660 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.935789 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.935803 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.935819 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:06 crc kubenswrapper[4917]: I1212 00:07:06.935852 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:06Z","lastTransitionTime":"2025-12-12T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.038932 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.038984 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.038996 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.039017 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.039039 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.142074 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.142129 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.142142 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.142163 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.142176 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.244496 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.244554 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.244571 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.244597 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.244613 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.348434 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.348486 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.348500 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.348520 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.348535 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.451047 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.451090 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.451099 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.451116 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.451128 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.517306 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.517357 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.517367 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.517387 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.517399 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: E1212 00:07:07.534502 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.539672 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.539752 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.539771 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.539800 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.539819 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: E1212 00:07:07.554538 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.558870 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.558926 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.558943 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.558973 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.558993 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: E1212 00:07:07.575325 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.578975 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.579012 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.579054 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.579073 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.579086 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: E1212 00:07:07.591400 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.595802 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.595867 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.595882 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.595901 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.595915 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.601799 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.601820 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.601888 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.601977 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:07 crc kubenswrapper[4917]: E1212 00:07:07.602104 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:07 crc kubenswrapper[4917]: E1212 00:07:07.602204 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:07 crc kubenswrapper[4917]: E1212 00:07:07.602333 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:07 crc kubenswrapper[4917]: E1212 00:07:07.602411 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:07 crc kubenswrapper[4917]: E1212 00:07:07.609052 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:07 crc kubenswrapper[4917]: E1212 00:07:07.609210 4917 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.610544 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.610581 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.610594 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.610608 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.610621 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.713605 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.713768 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.713791 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.713809 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.713820 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.817149 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.817223 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.817241 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.817265 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.817281 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.919837 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.919882 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.919898 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.919913 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:07 crc kubenswrapper[4917]: I1212 00:07:07.919925 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:07Z","lastTransitionTime":"2025-12-12T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.025434 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.025511 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.025525 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.025551 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.025572 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:08Z","lastTransitionTime":"2025-12-12T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.129037 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.129082 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.129091 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.129117 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.129129 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:08Z","lastTransitionTime":"2025-12-12T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.232058 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.232119 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.232129 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.232151 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.232163 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:08Z","lastTransitionTime":"2025-12-12T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.335014 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.335082 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.335097 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.335122 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.335142 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:08Z","lastTransitionTime":"2025-12-12T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.438500 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.438582 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.438601 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.438628 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.438659 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:08Z","lastTransitionTime":"2025-12-12T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.544050 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.544104 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.544119 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.544139 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.544155 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:08Z","lastTransitionTime":"2025-12-12T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.653003 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.653057 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.653090 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.653111 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.653126 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:08Z","lastTransitionTime":"2025-12-12T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.756792 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.756849 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.756862 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.756882 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.756898 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:08Z","lastTransitionTime":"2025-12-12T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.859962 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.860041 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.860053 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.860080 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.860099 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:08Z","lastTransitionTime":"2025-12-12T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.963978 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.964051 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.964066 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.964094 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:08 crc kubenswrapper[4917]: I1212 00:07:08.964111 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:08Z","lastTransitionTime":"2025-12-12T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.067955 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.068002 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.068012 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.068032 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.068049 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:09Z","lastTransitionTime":"2025-12-12T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.171889 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.171935 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.171944 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.171965 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.171979 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:09Z","lastTransitionTime":"2025-12-12T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.275405 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.275476 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.275493 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.275518 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.275531 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:09Z","lastTransitionTime":"2025-12-12T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.378951 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.378999 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.379010 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.379033 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.379045 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:09Z","lastTransitionTime":"2025-12-12T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.481562 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.481611 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.481624 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.481660 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.481674 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:09Z","lastTransitionTime":"2025-12-12T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.584313 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.584358 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.584367 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.584386 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.584396 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:09Z","lastTransitionTime":"2025-12-12T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.601827 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.601924 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.601841 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.602006 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:09 crc kubenswrapper[4917]: E1212 00:07:09.602090 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:09 crc kubenswrapper[4917]: E1212 00:07:09.602150 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:09 crc kubenswrapper[4917]: E1212 00:07:09.602214 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:09 crc kubenswrapper[4917]: E1212 00:07:09.602341 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.686918 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.686963 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.686973 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.686987 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.686998 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:09Z","lastTransitionTime":"2025-12-12T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.790284 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.790331 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.790345 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.790370 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.790382 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:09Z","lastTransitionTime":"2025-12-12T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.894000 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.894078 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.894095 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.894119 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:09 crc kubenswrapper[4917]: I1212 00:07:09.894131 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:09Z","lastTransitionTime":"2025-12-12T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.002517 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.002584 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.002598 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.002624 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.002659 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:10Z","lastTransitionTime":"2025-12-12T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.105411 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.105467 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.105480 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.105501 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.105515 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:10Z","lastTransitionTime":"2025-12-12T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.207899 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.207946 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.207961 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.207984 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.207997 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:10Z","lastTransitionTime":"2025-12-12T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.312213 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.312277 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.312291 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.312314 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.312333 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:10Z","lastTransitionTime":"2025-12-12T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.414936 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.414991 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.415005 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.415027 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.415043 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:10Z","lastTransitionTime":"2025-12-12T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.518483 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.518549 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.518564 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.518587 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.518605 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:10Z","lastTransitionTime":"2025-12-12T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.620956 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.621024 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.621036 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.621061 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.621079 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:10Z","lastTransitionTime":"2025-12-12T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.723667 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.723695 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.723705 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.723719 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.723728 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:10Z","lastTransitionTime":"2025-12-12T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.826713 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.827246 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.827259 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.827282 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.827296 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:10Z","lastTransitionTime":"2025-12-12T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.930975 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.931037 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.931051 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.931071 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:10 crc kubenswrapper[4917]: I1212 00:07:10.931083 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:10Z","lastTransitionTime":"2025-12-12T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.033928 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.033962 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.033971 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.033983 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.033992 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:11Z","lastTransitionTime":"2025-12-12T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.137135 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.137203 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.137215 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.137235 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.137248 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:11Z","lastTransitionTime":"2025-12-12T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.240913 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.241128 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.241151 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.241177 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.241192 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:11Z","lastTransitionTime":"2025-12-12T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.344254 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.344291 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.344302 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.344319 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.344331 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:11Z","lastTransitionTime":"2025-12-12T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.447120 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.447169 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.447178 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.447197 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.447211 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:11Z","lastTransitionTime":"2025-12-12T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.549959 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.550015 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.550027 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.550049 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.550063 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:11Z","lastTransitionTime":"2025-12-12T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.601289 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:11 crc kubenswrapper[4917]: E1212 00:07:11.601506 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.601621 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.601678 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:11 crc kubenswrapper[4917]: E1212 00:07:11.601747 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:11 crc kubenswrapper[4917]: E1212 00:07:11.601902 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.602074 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:11 crc kubenswrapper[4917]: E1212 00:07:11.603267 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.652917 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.652969 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.652980 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.652997 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.653009 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:11Z","lastTransitionTime":"2025-12-12T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.755826 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.755869 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.755880 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.755900 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.755911 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:11Z","lastTransitionTime":"2025-12-12T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.858471 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.858505 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.858513 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.858527 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.858535 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:11Z","lastTransitionTime":"2025-12-12T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.960809 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.960850 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.960861 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.960876 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:11 crc kubenswrapper[4917]: I1212 00:07:11.960887 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:11Z","lastTransitionTime":"2025-12-12T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.040079 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller" probeResult="failure" output="" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.063634 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.063731 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.063746 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.063770 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.063786 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:12Z","lastTransitionTime":"2025-12-12T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.166069 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.166122 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.166133 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.166154 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.166174 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:12Z","lastTransitionTime":"2025-12-12T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.267988 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.268028 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.268041 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.268059 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.268071 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:12Z","lastTransitionTime":"2025-12-12T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.370680 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.370737 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.370749 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.370768 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.370785 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:12Z","lastTransitionTime":"2025-12-12T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.472686 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.472728 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.472739 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.472762 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.472774 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:12Z","lastTransitionTime":"2025-12-12T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.575116 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.575167 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.575192 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.575219 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.575236 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:12Z","lastTransitionTime":"2025-12-12T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.679689 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.679744 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.679757 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.679780 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.679791 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:12Z","lastTransitionTime":"2025-12-12T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.783833 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.783883 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.783922 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.783944 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.783959 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:12Z","lastTransitionTime":"2025-12-12T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.886953 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.886996 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.887010 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.887028 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.887043 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:12Z","lastTransitionTime":"2025-12-12T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.990696 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.990769 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.990782 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.990833 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:12 crc kubenswrapper[4917]: I1212 00:07:12.990902 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:12Z","lastTransitionTime":"2025-12-12T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.093383 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.093436 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.093456 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.093479 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.093496 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:13Z","lastTransitionTime":"2025-12-12T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.196349 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.196431 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.196448 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.196493 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.196510 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:13Z","lastTransitionTime":"2025-12-12T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.298804 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.298843 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.298852 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.298869 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.298877 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:13Z","lastTransitionTime":"2025-12-12T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.402308 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.402375 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.402392 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.402419 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.402440 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:13Z","lastTransitionTime":"2025-12-12T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.505239 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.505311 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.505324 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.505482 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.505542 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:13Z","lastTransitionTime":"2025-12-12T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.601537 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.601566 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:13 crc kubenswrapper[4917]: E1212 00:07:13.601725 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.601757 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.601810 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:13 crc kubenswrapper[4917]: E1212 00:07:13.601904 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:13 crc kubenswrapper[4917]: E1212 00:07:13.601844 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:13 crc kubenswrapper[4917]: E1212 00:07:13.602031 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.607447 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.607508 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.607520 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.607536 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.607548 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:13Z","lastTransitionTime":"2025-12-12T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.710682 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.710732 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.710749 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.710772 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.710789 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:13Z","lastTransitionTime":"2025-12-12T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.813954 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.814016 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.814030 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.814054 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.814071 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:13Z","lastTransitionTime":"2025-12-12T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.916393 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.916458 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.916466 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.916499 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:13 crc kubenswrapper[4917]: I1212 00:07:13.916512 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:13Z","lastTransitionTime":"2025-12-12T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.018742 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.018782 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.018791 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.018806 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.018823 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:14Z","lastTransitionTime":"2025-12-12T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.121569 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.121678 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.121693 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.121707 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.121718 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:14Z","lastTransitionTime":"2025-12-12T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.225324 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.225368 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.225381 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.225430 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.225442 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:14Z","lastTransitionTime":"2025-12-12T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.328721 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.328800 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.328810 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.328832 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.328845 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:14Z","lastTransitionTime":"2025-12-12T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.432047 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.432095 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.432107 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.432124 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.432135 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:14Z","lastTransitionTime":"2025-12-12T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.535164 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.535228 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.535242 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.535261 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.535273 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:14Z","lastTransitionTime":"2025-12-12T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.638060 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.638106 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.638120 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.638138 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.638151 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:14Z","lastTransitionTime":"2025-12-12T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.741339 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.741389 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.741406 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.741428 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.741439 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:14Z","lastTransitionTime":"2025-12-12T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.844478 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.844523 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.844533 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.844551 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.844568 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:14Z","lastTransitionTime":"2025-12-12T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.947917 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.947969 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.947978 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.947992 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:14 crc kubenswrapper[4917]: I1212 00:07:14.948004 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:14Z","lastTransitionTime":"2025-12-12T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.050792 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.050845 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.050860 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.050883 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.050896 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:15Z","lastTransitionTime":"2025-12-12T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.154524 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.154560 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.154568 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.154581 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.154592 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:15Z","lastTransitionTime":"2025-12-12T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.258902 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.258960 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.258974 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.258998 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.259012 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:15Z","lastTransitionTime":"2025-12-12T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.361739 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.361783 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.361793 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.361811 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.361824 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:15Z","lastTransitionTime":"2025-12-12T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.464319 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.464371 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.464383 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.464403 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.464417 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:15Z","lastTransitionTime":"2025-12-12T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.567906 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.567977 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.568002 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.568030 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.568051 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:15Z","lastTransitionTime":"2025-12-12T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.601857 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:15 crc kubenswrapper[4917]: E1212 00:07:15.602015 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.602046 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.602214 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:15 crc kubenswrapper[4917]: E1212 00:07:15.602216 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:15 crc kubenswrapper[4917]: E1212 00:07:15.602274 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.602412 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:15 crc kubenswrapper[4917]: E1212 00:07:15.602464 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.624745 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.641917 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.665025 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.670228 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.670282 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.670294 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.670316 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.670331 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:15Z","lastTransitionTime":"2025-12-12T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.677712 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.688107 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.703007 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.719333 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.734701 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.748192 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf7f265e6151e198ba075d608d52a4
2d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.763912 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.773451 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.773503 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.773514 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.773532 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.773545 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:15Z","lastTransitionTime":"2025-12-12T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.777061 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd3b1e30-806e-4e60-8457-5a8dc8255b49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db3014d28fbd4ad947b0bb07b8e2cdd07a9a42923f12a89dafbb482228861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf369a1cb2eaf2ca16299cd6a8e314
ae2693ede79e120eaae657dcfc7c1629c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ac02655901d9d729b6cf2cccf17ed4104f1d1f568d813c68920686068db586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.794251 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.807918 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.821096 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.836561 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.853834 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f917285
7465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.867682 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.876532 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.876625 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.876655 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.876723 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.876745 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:15Z","lastTransitionTime":"2025-12-12T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.915724 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:15 crc kubenswrapper[4917]: E1212 00:07:15.916138 4917 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:07:15 crc kubenswrapper[4917]: E1212 00:07:15.916303 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs podName:58f4853b-9736-4a03-8c86-1627cb51acbe nodeName:}" failed. No retries permitted until 2025-12-12 00:07:47.916279924 +0000 UTC m=+102.694080737 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs") pod "network-metrics-daemon-f4t96" (UID: "58f4853b-9736-4a03-8c86-1627cb51acbe") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.980411 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.980465 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.980477 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.980498 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:15 crc kubenswrapper[4917]: I1212 00:07:15.980510 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:15Z","lastTransitionTime":"2025-12-12T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.083692 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.083761 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.083780 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.083805 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.083822 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:16Z","lastTransitionTime":"2025-12-12T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.187077 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.187180 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.187190 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.187216 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.187231 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:16Z","lastTransitionTime":"2025-12-12T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.290822 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.290878 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.290889 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.290917 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.290931 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:16Z","lastTransitionTime":"2025-12-12T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.393578 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.393615 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.393624 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.393660 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.393681 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:16Z","lastTransitionTime":"2025-12-12T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.496218 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.496275 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.496289 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.496313 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.496332 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:16Z","lastTransitionTime":"2025-12-12T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.599328 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.599384 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.599392 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.599406 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.599418 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:16Z","lastTransitionTime":"2025-12-12T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.702371 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.702413 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.702424 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.702441 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.702453 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:16Z","lastTransitionTime":"2025-12-12T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.811779 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.811837 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.811845 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.811864 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.811876 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:16Z","lastTransitionTime":"2025-12-12T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.916002 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.916084 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.916096 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.916120 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:16 crc kubenswrapper[4917]: I1212 00:07:16.916134 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:16Z","lastTransitionTime":"2025-12-12T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.018936 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.018974 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.018983 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.018997 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.019005 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.055135 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24mnq_7ee00e08-bb29-427d-9de3-6b0616e409fe/kube-multus/0.log" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.055189 4917 generic.go:334] "Generic (PLEG): container finished" podID="7ee00e08-bb29-427d-9de3-6b0616e409fe" containerID="81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4" exitCode=1 Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.055227 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24mnq" event={"ID":"7ee00e08-bb29-427d-9de3-6b0616e409fe","Type":"ContainerDied","Data":"81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4"} Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.055729 4917 scope.go:117] "RemoveContainer" containerID="81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.069922 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.082598 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.093419 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf7f265e6151e198ba075d608d52a4
2d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.107036 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd10
5a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.117849 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.121111 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.121142 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.121155 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.121171 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.121183 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.130413 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd3b1e30-806e-4e60-8457-5a8dc8255b49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db3014d28fbd4ad947b0bb07b8e2cdd07a9a42923f12a89dafbb482228861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf369a1cb2eaf2ca16299cd6a8e314ae2693ede79e120eaae657dcfc7c1629c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ac02655901d9d729b6cf2cccf17ed4104f1d1f568d813c68920686068db586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.142238 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.154501 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.167295 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.181259 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:07:16Z\\\",\\\"message\\\":\\\"2025-12-12T00:06:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_97eaa2e1-a0a6-4a6a-87e4-6356f1922b2a\\\\n2025-12-12T00:06:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_97eaa2e1-a0a6-4a6a-87e4-6356f1922b2a to /host/opt/cni/bin/\\\\n2025-12-12T00:06:31Z [verbose] multus-daemon started\\\\n2025-12-12T00:06:31Z [verbose] Readiness Indicator file check\\\\n2025-12-12T00:07:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.192815 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc 
kubenswrapper[4917]: I1212 00:07:17.207964 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86d
eff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 
00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.222708 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.223944 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.223997 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.224009 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.224025 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.224035 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.245225 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.261188 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.274554 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.288500 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.327087 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.327129 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.327138 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.327157 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.327169 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.429463 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.429526 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.429537 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.429555 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.429566 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.532166 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.532207 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.532220 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.532237 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.532250 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.602035 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:17 crc kubenswrapper[4917]: E1212 00:07:17.602163 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.602338 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:17 crc kubenswrapper[4917]: E1212 00:07:17.602384 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.602509 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:17 crc kubenswrapper[4917]: E1212 00:07:17.602559 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.602868 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:17 crc kubenswrapper[4917]: E1212 00:07:17.602927 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.620421 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.620469 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.620479 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.620494 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.620505 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: E1212 00:07:17.634264 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.638637 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.638698 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.638709 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.638727 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.638738 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: E1212 00:07:17.654612 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.658815 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.658873 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.658884 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.658901 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.658912 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: E1212 00:07:17.670535 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…image list identical to the previous attempt above…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.674792 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.674823 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.674833 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.674848 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.674860 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: E1212 00:07:17.685444 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…image list identical to the previous attempts above…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.689275 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.689312 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.689321 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.689336 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.689349 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: E1212 00:07:17.700178 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:17 crc kubenswrapper[4917]: E1212 00:07:17.700338 4917 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.701916 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.701941 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.701953 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.701970 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.701982 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.804581 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.804624 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.804635 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.804665 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.804685 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.911423 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.911505 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.911521 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.911550 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:17 crc kubenswrapper[4917]: I1212 00:07:17.911565 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:17Z","lastTransitionTime":"2025-12-12T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.013954 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.014045 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.014085 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.014111 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.014314 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:18Z","lastTransitionTime":"2025-12-12T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.062919 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24mnq_7ee00e08-bb29-427d-9de3-6b0616e409fe/kube-multus/0.log" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.062998 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24mnq" event={"ID":"7ee00e08-bb29-427d-9de3-6b0616e409fe","Type":"ContainerStarted","Data":"0c68257e5dd1d97628cb53c884e963ded61b1a597be47717aceb3b97fde8f979"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.078089 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.089822 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.115906 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.116831 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.116854 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.116863 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.116881 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.116895 4917 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:18Z","lastTransitionTime":"2025-12-12T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.127552 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.138848 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.151608 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.163690 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.173727 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.185714 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf7f265e6151e198ba075d608d52a4
2d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.195338 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.209138 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd3b1e30-806e-4e60-8457-5a8dc8255b49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db3014d28fbd4ad947b0bb07b8e2cdd07a9a42923f12a89dafbb482228861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf369a1cb2eaf2ca16299cd6a8e314ae2693ede79e120eaae657dcfc7c1629c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ac02655901d9d729b6cf2cccf17ed4104f1d1f568d813c68920686068db586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.220201 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.220237 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.220276 4917 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.220298 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.220311 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:18Z","lastTransitionTime":"2025-12-12T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.220391 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.232622 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.243073 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.259819 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c68257e5dd1d97628cb53c884e963ded61b1a597be47717aceb3b97fde8f979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:07:16Z\\\",\\\"message\\\":\\\"2025-12-12T00:06:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_97eaa2e1-a0a6-4a6a-87e4-6356f1922b2a\\\\n2025-12-12T00:06:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_97eaa2e1-a0a6-4a6a-87e4-6356f1922b2a to /host/opt/cni/bin/\\\\n2025-12-12T00:06:31Z [verbose] multus-daemon started\\\\n2025-12-12T00:06:31Z [verbose] Readiness Indicator file check\\\\n2025-12-12T00:07:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.278098 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd10
5a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.288439 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:18Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:18 crc 
kubenswrapper[4917]: I1212 00:07:18.322804 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.322851 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.322864 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.322882 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.322894 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:18Z","lastTransitionTime":"2025-12-12T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.427455 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.427528 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.427546 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.427588 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.427625 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:18Z","lastTransitionTime":"2025-12-12T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.531150 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.531208 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.531226 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.531254 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.531272 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:18Z","lastTransitionTime":"2025-12-12T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.634291 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.634340 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.634353 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.634374 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.634389 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:18Z","lastTransitionTime":"2025-12-12T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.737097 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.737134 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.737143 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.737155 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.737165 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:18Z","lastTransitionTime":"2025-12-12T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.839484 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.839535 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.839545 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.839558 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.839568 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:18Z","lastTransitionTime":"2025-12-12T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.943162 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.943218 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.943232 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.943251 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:18 crc kubenswrapper[4917]: I1212 00:07:18.943264 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:18Z","lastTransitionTime":"2025-12-12T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.046304 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.046349 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.046357 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.046375 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.046385 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:19Z","lastTransitionTime":"2025-12-12T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.148727 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.148768 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.148779 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.148794 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.148805 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:19Z","lastTransitionTime":"2025-12-12T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.252070 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.252114 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.252122 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.252140 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.252149 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:19Z","lastTransitionTime":"2025-12-12T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.355528 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.355577 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.355589 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.355608 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.355621 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:19Z","lastTransitionTime":"2025-12-12T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.458086 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.458156 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.458173 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.458199 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.458216 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:19Z","lastTransitionTime":"2025-12-12T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.560770 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.560839 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.560891 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.560918 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.560938 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:19Z","lastTransitionTime":"2025-12-12T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.601499 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.601610 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:19 crc kubenswrapper[4917]: E1212 00:07:19.601686 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.601723 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:19 crc kubenswrapper[4917]: E1212 00:07:19.601835 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.601856 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:19 crc kubenswrapper[4917]: E1212 00:07:19.602001 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:19 crc kubenswrapper[4917]: E1212 00:07:19.602095 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.663140 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.663192 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.663208 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.663225 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.663238 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:19Z","lastTransitionTime":"2025-12-12T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.766372 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.766416 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.766425 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.766441 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.766450 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:19Z","lastTransitionTime":"2025-12-12T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.868471 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.868507 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.868524 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.868546 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.868557 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:19Z","lastTransitionTime":"2025-12-12T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.971868 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.971959 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.971972 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.971995 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:19 crc kubenswrapper[4917]: I1212 00:07:19.972007 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:19Z","lastTransitionTime":"2025-12-12T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.074621 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.074721 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.074736 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.074757 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.074794 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:20Z","lastTransitionTime":"2025-12-12T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.177192 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.177234 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.177244 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.177260 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.177270 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:20Z","lastTransitionTime":"2025-12-12T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.279502 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.279553 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.279564 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.279580 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.279594 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:20Z","lastTransitionTime":"2025-12-12T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.382308 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.382346 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.382358 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.382380 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.382392 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:20Z","lastTransitionTime":"2025-12-12T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.485104 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.485169 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.485182 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.485198 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.485209 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:20Z","lastTransitionTime":"2025-12-12T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.588007 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.588047 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.588055 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.588070 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.588079 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:20Z","lastTransitionTime":"2025-12-12T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.690768 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.690805 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.690817 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.690854 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.690870 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:20Z","lastTransitionTime":"2025-12-12T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.793887 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.794181 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.794353 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.794459 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.794543 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:20Z","lastTransitionTime":"2025-12-12T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.896848 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.896890 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.896903 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.896918 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.896927 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:20Z","lastTransitionTime":"2025-12-12T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.999090 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.999123 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.999130 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.999144 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:20 crc kubenswrapper[4917]: I1212 00:07:20.999152 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:20Z","lastTransitionTime":"2025-12-12T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.101994 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.102266 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.102394 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.102492 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.102577 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:21Z","lastTransitionTime":"2025-12-12T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.206745 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.207121 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.207307 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.207704 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.208049 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:21Z","lastTransitionTime":"2025-12-12T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.311062 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.311372 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.311452 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.311546 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.311623 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:21Z","lastTransitionTime":"2025-12-12T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.413527 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.413560 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.413573 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.413587 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.413598 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:21Z","lastTransitionTime":"2025-12-12T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.515560 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.515601 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.515609 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.515626 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.515637 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:21Z","lastTransitionTime":"2025-12-12T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.601834 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:21 crc kubenswrapper[4917]: E1212 00:07:21.602426 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.602822 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.603029 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:21 crc kubenswrapper[4917]: E1212 00:07:21.603420 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:21 crc kubenswrapper[4917]: E1212 00:07:21.603205 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.603078 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:21 crc kubenswrapper[4917]: E1212 00:07:21.603527 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.618168 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.618241 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.618260 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.618288 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.618308 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:21Z","lastTransitionTime":"2025-12-12T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.721283 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.721607 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.721776 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.722029 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.722248 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:21Z","lastTransitionTime":"2025-12-12T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.825322 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.825364 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.825372 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.825414 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.825426 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:21Z","lastTransitionTime":"2025-12-12T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.928170 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.928225 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.928242 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.928266 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:21 crc kubenswrapper[4917]: I1212 00:07:21.928283 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:21Z","lastTransitionTime":"2025-12-12T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.030997 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.031040 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.031051 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.031068 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.031079 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:22Z","lastTransitionTime":"2025-12-12T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.133597 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.133910 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.133985 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.134085 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.134173 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:22Z","lastTransitionTime":"2025-12-12T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.237689 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.237971 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.238046 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.238118 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.238173 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:22Z","lastTransitionTime":"2025-12-12T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.341062 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.341120 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.341135 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.341158 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.341170 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:22Z","lastTransitionTime":"2025-12-12T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.443964 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.444010 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.444020 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.444033 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.444042 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:22Z","lastTransitionTime":"2025-12-12T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.546984 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.547053 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.547065 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.547086 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.547098 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:22Z","lastTransitionTime":"2025-12-12T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.649923 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.649989 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.650000 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.650019 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.650031 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:22Z","lastTransitionTime":"2025-12-12T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.751733 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.751807 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.751824 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.751843 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.751855 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:22Z","lastTransitionTime":"2025-12-12T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.854887 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.854961 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.854984 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.855013 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.855037 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:22Z","lastTransitionTime":"2025-12-12T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.957708 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.957762 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.957773 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.957793 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:22 crc kubenswrapper[4917]: I1212 00:07:22.957807 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:22Z","lastTransitionTime":"2025-12-12T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.060313 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.060368 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.060383 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.060403 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.060418 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:23Z","lastTransitionTime":"2025-12-12T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.162763 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.162838 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.162862 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.162889 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.162910 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:23Z","lastTransitionTime":"2025-12-12T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.264941 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.264985 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.265005 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.265024 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.265036 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:23Z","lastTransitionTime":"2025-12-12T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.367856 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.367917 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.367930 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.367950 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.367962 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:23Z","lastTransitionTime":"2025-12-12T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.471094 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.471148 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.471165 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.471188 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.471204 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:23Z","lastTransitionTime":"2025-12-12T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.574058 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.574110 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.574127 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.574152 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.574169 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:23Z","lastTransitionTime":"2025-12-12T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.600974 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.601015 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.601032 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:23 crc kubenswrapper[4917]: E1212 00:07:23.601212 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.601244 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:23 crc kubenswrapper[4917]: E1212 00:07:23.601454 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:23 crc kubenswrapper[4917]: E1212 00:07:23.601589 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:23 crc kubenswrapper[4917]: E1212 00:07:23.601735 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.677594 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.677661 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.677679 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.677697 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.677710 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:23Z","lastTransitionTime":"2025-12-12T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.781266 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.781740 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.781945 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.782129 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.782302 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:23Z","lastTransitionTime":"2025-12-12T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.885206 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.885274 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.885297 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.885323 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.885347 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:23Z","lastTransitionTime":"2025-12-12T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.987067 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.987306 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.987382 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.987476 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:23 crc kubenswrapper[4917]: I1212 00:07:23.987531 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:23Z","lastTransitionTime":"2025-12-12T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.083114 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/2.log" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.083891 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/1.log" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.086885 4917 generic.go:334] "Generic (PLEG): container finished" podID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerID="25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15" exitCode=1 Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.086928 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15"} Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.086968 4917 scope.go:117] "RemoveContainer" containerID="10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.087742 4917 scope.go:117] "RemoveContainer" containerID="25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15" Dec 12 00:07:24 crc kubenswrapper[4917]: E1212 00:07:24.087871 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-26hjd_openshift-ovn-kubernetes(c740630c-23cb-4c02-ab4e-bac3d773dce4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.092521 4917 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.092869 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.093093 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.093292 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.093490 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:24Z","lastTransitionTime":"2025-12-12T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.101555 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf7f265e6151e198ba075d608d52a42d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.113473 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.124120 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.136161 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.148303 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.161079 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c68257e5dd1d97628cb53c884e963ded61b1a597be47717aceb3b97fde8f979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:07:16Z\\\",\\\"message\\\":\\\"2025-12-12T00:06:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_97eaa2e1-a0a6-4a6a-87e4-6356f1922b2a\\\\n2025-12-12T00:06:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_97eaa2e1-a0a6-4a6a-87e4-6356f1922b2a to /host/opt/cni/bin/\\\\n2025-12-12T00:06:31Z [verbose] multus-daemon started\\\\n2025-12-12T00:06:31Z [verbose] Readiness Indicator file check\\\\n2025-12-12T00:07:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.185470 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd10
5a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.197722 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.197944 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.198002 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.198015 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.198028 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.198039 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:24Z","lastTransitionTime":"2025-12-12T00:07:24Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.210172 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd3b1e30-806e-4e60-8457-5a8dc8255b49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db3014d28fbd4ad947b0bb07b8e2cdd07a9a42923f12a89dafbb482228861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf369a1cb2eaf2ca16299cd6a8e314ae2693ede79e120eaae657dcfc7c1629c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ac02655901d9d729b6cf2cccf17ed4104f1d1f568d813c68920686068db586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.225935 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.236111 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc 
kubenswrapper[4917]: I1212 00:07:24.248885 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86d
eff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 
00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.259790 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.275906 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:07:23Z\\\",\\\"message\\\":\\\"56 6611 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr: failed to check if pod openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr is in primary UDN: could not find OVN pod annotation in map[]\\\\nI1212 00:07:22.529894 6611 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-nplkn: failed to check if pod 
openshift-multus/multus-admission-controller-857f4d67dd-nplkn is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI1212 00:07:22.529928 6611 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-dns-operator/dns-operator-744455d44c-w7hp2: failed to check if pod openshift-dns-operator/dns-operator-744455d44c-w7hp2 is in primary UDN: could not find OVN pod annotation in map[]\\\\nE1212 00:07:22.604565 6611 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI1212 00:07:22.605711 6611 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:07:22.605753 6611 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\
\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.286711 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.297743 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.301238 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.301283 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.301301 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:24 crc 
kubenswrapper[4917]: I1212 00:07:24.301323 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.301338 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:24Z","lastTransitionTime":"2025-12-12T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.312770 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.404037 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.404112 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.404133 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.404200 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.404223 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:24Z","lastTransitionTime":"2025-12-12T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.507685 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.507751 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.507769 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.507798 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.507816 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:24Z","lastTransitionTime":"2025-12-12T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.610470 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.610529 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.610545 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.610567 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.610584 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:24Z","lastTransitionTime":"2025-12-12T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.713248 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.713297 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.713309 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.713325 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.713335 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:24Z","lastTransitionTime":"2025-12-12T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.815371 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.815417 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.815430 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.815446 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.815458 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:24Z","lastTransitionTime":"2025-12-12T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.918010 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.918054 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.918084 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.918100 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:24 crc kubenswrapper[4917]: I1212 00:07:24.918110 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:24Z","lastTransitionTime":"2025-12-12T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.021050 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.021110 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.021122 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.021145 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.021158 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:25Z","lastTransitionTime":"2025-12-12T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.093068 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/2.log" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.123454 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.123489 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.123499 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.123534 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.123547 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:25Z","lastTransitionTime":"2025-12-12T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.226507 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.226574 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.226592 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.226617 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.226635 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:25Z","lastTransitionTime":"2025-12-12T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.329492 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.329550 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.329570 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.329596 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.329614 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:25Z","lastTransitionTime":"2025-12-12T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.432814 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.432855 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.432867 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.432885 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.432898 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:25Z","lastTransitionTime":"2025-12-12T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.535470 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.535522 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.535534 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.535552 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.535564 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:25Z","lastTransitionTime":"2025-12-12T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.601421 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.601472 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.601423 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:25 crc kubenswrapper[4917]: E1212 00:07:25.601586 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.601424 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:25 crc kubenswrapper[4917]: E1212 00:07:25.601659 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:25 crc kubenswrapper[4917]: E1212 00:07:25.601789 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:25 crc kubenswrapper[4917]: E1212 00:07:25.601887 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.620523 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 
only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.633581 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.637731 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.637773 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.637785 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.637803 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.637820 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:25Z","lastTransitionTime":"2025-12-12T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.651225 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:07:23Z\\\",\\\"message\\\":\\\"56 6611 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr: failed to check if pod openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr is in primary UDN: could not find OVN pod annotation in map[]\\\\nI1212 00:07:22.529894 6611 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-nplkn: failed to check if pod 
openshift-multus/multus-admission-controller-857f4d67dd-nplkn is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI1212 00:07:22.529928 6611 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-dns-operator/dns-operator-744455d44c-w7hp2: failed to check if pod openshift-dns-operator/dns-operator-744455d44c-w7hp2 is in primary UDN: could not find OVN pod annotation in map[]\\\\nE1212 00:07:22.604565 6611 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI1212 00:07:22.605711 6611 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:07:22.605753 6611 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\
\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.664720 4917 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.684944 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.718135 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.739085 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.740277 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.740330 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.740345 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.740364 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.740377 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:25Z","lastTransitionTime":"2025-12-12T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.754170 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.766283 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf
7f265e6151e198ba075d608d52a42d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.777765 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.791307 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c68257e5dd1d97628cb53c884e963ded61b1a597be47717aceb3b97fde8f979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:07:16Z\\\",\\\"message\\\":\\\"2025-12-12T00:06:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_97eaa2e1-a0a6-4a6a-87e4-6356f1922b2a\\\\n2025-12-12T00:06:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_97eaa2e1-a0a6-4a6a-87e4-6356f1922b2a to /host/opt/cni/bin/\\\\n2025-12-12T00:06:31Z [verbose] multus-daemon started\\\\n2025-12-12T00:06:31Z [verbose] Readiness Indicator file check\\\\n2025-12-12T00:07:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.807417 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd10
5a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.819361 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.833471 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd3b1e30-806e-4e60-8457-5a8dc8255b49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db3014d28fbd4ad947b0bb07b8e2cdd07a9a42923f12a89dafbb482228861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf369a1cb2eaf2ca16299cd6a8e314ae2693ede79e120eaae657dcfc7c1629c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ac02655901d9d729b6cf2cccf17ed4104f1d1f568d813c68920686068db586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.842673 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.842718 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.842731 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.842749 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.842762 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:25Z","lastTransitionTime":"2025-12-12T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.848072 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.861944 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.873637 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:25 crc 
kubenswrapper[4917]: I1212 00:07:25.944877 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.944913 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.944925 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.944942 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:25 crc kubenswrapper[4917]: I1212 00:07:25.944956 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:25Z","lastTransitionTime":"2025-12-12T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.047738 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.047993 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.048007 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.048025 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.048035 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:26Z","lastTransitionTime":"2025-12-12T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.150914 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.150983 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.151000 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.151023 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.151040 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:26Z","lastTransitionTime":"2025-12-12T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.254080 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.254162 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.254181 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.254208 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.254228 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:26Z","lastTransitionTime":"2025-12-12T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.356438 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.356485 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.356497 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.356515 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.356529 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:26Z","lastTransitionTime":"2025-12-12T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.459342 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.459404 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.459421 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.459448 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.459465 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:26Z","lastTransitionTime":"2025-12-12T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.561970 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.562036 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.562051 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.562069 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.562082 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:26Z","lastTransitionTime":"2025-12-12T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.665307 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.665779 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.665952 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.666118 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.666267 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:26Z","lastTransitionTime":"2025-12-12T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.769419 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.769466 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.769476 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.769492 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.769503 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:26Z","lastTransitionTime":"2025-12-12T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.873367 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.873794 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.873944 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.874167 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.874344 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:26Z","lastTransitionTime":"2025-12-12T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.978041 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.978458 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.978675 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.978873 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:26 crc kubenswrapper[4917]: I1212 00:07:26.979050 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:26Z","lastTransitionTime":"2025-12-12T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.082120 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.082384 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.082479 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.082575 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.082681 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.186112 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.186148 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.186159 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.186176 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.186188 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.288284 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.288336 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.288349 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.288366 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.288378 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.392107 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.392172 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.392191 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.392212 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.392228 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.448251 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.448366 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.448395 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:31.448366784 +0000 UTC m=+146.226167627 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.448440 4917 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.448464 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.448484 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:08:31.448473756 +0000 UTC m=+146.226274579 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.448671 4917 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.448765 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 00:08:31.448747754 +0000 UTC m=+146.226548577 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.495342 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.495474 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.495512 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.495545 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.495572 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.549140 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.549221 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.549394 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.549412 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.549409 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.549477 4917 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.549425 4917 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.549500 4917 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.549544 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 00:08:31.549529983 +0000 UTC m=+146.327330796 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.549582 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 00:08:31.549555944 +0000 UTC m=+146.327356787 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.597986 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.598065 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.598089 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.598115 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.598132 4917 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.601613 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.601744 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.601756 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.601903 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.601923 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.602065 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.602151 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.602283 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.700930 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.701043 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.701067 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.701098 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.701121 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.751902 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.751968 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.751981 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.752003 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.752016 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.768569 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.773598 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.773663 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.773678 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.773701 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.773718 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.787740 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.792813 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.792863 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.792875 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.792894 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.792907 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.807571 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.812354 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.812480 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.812569 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.812691 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.812798 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.826066 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.830280 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.830309 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.830318 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.830332 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.830341 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.843501 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:27Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:27 crc kubenswrapper[4917]: E1212 00:07:27.843664 4917 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.845655 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.845709 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.845721 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.845742 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.845764 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.947913 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.947965 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.947980 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.948000 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:27 crc kubenswrapper[4917]: I1212 00:07:27.948014 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:27Z","lastTransitionTime":"2025-12-12T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.050591 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.050688 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.050702 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.050730 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.050744 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:28Z","lastTransitionTime":"2025-12-12T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.153610 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.153693 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.153710 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.153727 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.153739 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:28Z","lastTransitionTime":"2025-12-12T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.256982 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.257044 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.257066 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.257095 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.257117 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:28Z","lastTransitionTime":"2025-12-12T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.359596 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.359720 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.359746 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.359775 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.359796 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:28Z","lastTransitionTime":"2025-12-12T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.462388 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.462441 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.462453 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.462470 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.462481 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:28Z","lastTransitionTime":"2025-12-12T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.565049 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.565124 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.565142 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.565171 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.565195 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:28Z","lastTransitionTime":"2025-12-12T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.669120 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.669183 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.669200 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.669225 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.669242 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:28Z","lastTransitionTime":"2025-12-12T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.772427 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.772511 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.772525 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.772552 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.772567 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:28Z","lastTransitionTime":"2025-12-12T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.875166 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.875236 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.875262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.875292 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.875315 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:28Z","lastTransitionTime":"2025-12-12T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.977557 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.977620 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.977676 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.977703 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:28 crc kubenswrapper[4917]: I1212 00:07:28.977722 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:28Z","lastTransitionTime":"2025-12-12T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.079969 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.080027 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.080090 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.080113 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.080130 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:29Z","lastTransitionTime":"2025-12-12T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.187040 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.187114 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.187126 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.187150 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.187163 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:29Z","lastTransitionTime":"2025-12-12T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.289315 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.289380 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.289397 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.289420 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.289438 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:29Z","lastTransitionTime":"2025-12-12T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.392140 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.392237 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.392255 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.392279 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.392300 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:29Z","lastTransitionTime":"2025-12-12T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.495323 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.495395 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.495411 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.495437 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.495471 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:29Z","lastTransitionTime":"2025-12-12T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.598940 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.599016 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.599068 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.599101 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.599124 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:29Z","lastTransitionTime":"2025-12-12T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.601136 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.601175 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:29 crc kubenswrapper[4917]: E1212 00:07:29.601288 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:29 crc kubenswrapper[4917]: E1212 00:07:29.601326 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.601342 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.601409 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:29 crc kubenswrapper[4917]: E1212 00:07:29.601581 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:29 crc kubenswrapper[4917]: E1212 00:07:29.601690 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.614272 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.701566 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.701723 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.701742 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.701768 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.701786 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:29Z","lastTransitionTime":"2025-12-12T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.806074 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.806128 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.806147 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.806170 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.806187 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:29Z","lastTransitionTime":"2025-12-12T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.909716 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.909764 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.909775 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.909792 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:29 crc kubenswrapper[4917]: I1212 00:07:29.909805 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:29Z","lastTransitionTime":"2025-12-12T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.011975 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.012031 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.012043 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.012062 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.012074 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:30Z","lastTransitionTime":"2025-12-12T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.113976 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.114013 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.114022 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.114036 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.114047 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:30Z","lastTransitionTime":"2025-12-12T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.216488 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.216517 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.216525 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.216538 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.216547 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:30Z","lastTransitionTime":"2025-12-12T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.318859 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.318906 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.318915 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.318932 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.318942 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:30Z","lastTransitionTime":"2025-12-12T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.422057 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.422122 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.422142 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.422168 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.422187 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:30Z","lastTransitionTime":"2025-12-12T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.524418 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.524477 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.524497 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.524519 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.524536 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:30Z","lastTransitionTime":"2025-12-12T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.627725 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.627788 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.627800 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.627819 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.627831 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:30Z","lastTransitionTime":"2025-12-12T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.731494 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.731886 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.731917 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.731945 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.731967 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:30Z","lastTransitionTime":"2025-12-12T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.834925 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.834982 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.834996 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.835020 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.835036 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:30Z","lastTransitionTime":"2025-12-12T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.937688 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.937734 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.937745 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.937761 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:30 crc kubenswrapper[4917]: I1212 00:07:30.937770 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:30Z","lastTransitionTime":"2025-12-12T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.040174 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.040223 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.040234 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.040251 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.040264 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:31Z","lastTransitionTime":"2025-12-12T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.142391 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.142435 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.142447 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.142467 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.142479 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:31Z","lastTransitionTime":"2025-12-12T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.245425 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.245460 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.245469 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.245482 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.245491 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:31Z","lastTransitionTime":"2025-12-12T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.348561 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.348685 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.348713 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.348744 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.348761 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:31Z","lastTransitionTime":"2025-12-12T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.451182 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.451235 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.451247 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.451265 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.451295 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:31Z","lastTransitionTime":"2025-12-12T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.553772 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.553836 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.553851 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.553871 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.553887 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:31Z","lastTransitionTime":"2025-12-12T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.601113 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.601162 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.601166 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.601330 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:31 crc kubenswrapper[4917]: E1212 00:07:31.601328 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:31 crc kubenswrapper[4917]: E1212 00:07:31.601404 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:31 crc kubenswrapper[4917]: E1212 00:07:31.601496 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:31 crc kubenswrapper[4917]: E1212 00:07:31.601620 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.656698 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.656774 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.656798 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.656837 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.656872 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:31Z","lastTransitionTime":"2025-12-12T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.759710 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.759761 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.759774 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.759790 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.759801 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:31Z","lastTransitionTime":"2025-12-12T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.862508 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.862547 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.862557 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.862573 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.862584 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:31Z","lastTransitionTime":"2025-12-12T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.965761 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.965840 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.965857 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.965882 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:31 crc kubenswrapper[4917]: I1212 00:07:31.965899 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:31Z","lastTransitionTime":"2025-12-12T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.068482 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.068542 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.068559 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.068581 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.068597 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:32Z","lastTransitionTime":"2025-12-12T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.171583 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.171617 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.171630 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.171666 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.171678 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:32Z","lastTransitionTime":"2025-12-12T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.275037 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.275097 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.275118 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.275145 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.275166 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:32Z","lastTransitionTime":"2025-12-12T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.378335 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.378403 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.378424 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.378455 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.378478 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:32Z","lastTransitionTime":"2025-12-12T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.482101 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.482159 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.482178 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.482203 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.482222 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:32Z","lastTransitionTime":"2025-12-12T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.584865 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.585189 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.585506 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.585793 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.585922 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:32Z","lastTransitionTime":"2025-12-12T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.615424 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.689213 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.689244 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.689253 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.689272 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.689289 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:32Z","lastTransitionTime":"2025-12-12T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.792434 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.792469 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.792478 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.792492 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.792507 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:32Z","lastTransitionTime":"2025-12-12T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.895341 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.895384 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.895398 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.895414 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.895426 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:32Z","lastTransitionTime":"2025-12-12T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.998046 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.998107 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.998129 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.998152 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:32 crc kubenswrapper[4917]: I1212 00:07:32.998169 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:32Z","lastTransitionTime":"2025-12-12T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.101341 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.101390 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.101402 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.101421 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.101432 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:33Z","lastTransitionTime":"2025-12-12T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.203413 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.203451 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.203462 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.203480 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.203490 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:33Z","lastTransitionTime":"2025-12-12T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.305963 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.306008 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.306017 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.306030 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.306039 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:33Z","lastTransitionTime":"2025-12-12T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.408917 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.408959 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.408969 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.408983 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.408996 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:33Z","lastTransitionTime":"2025-12-12T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.512150 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.512218 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.512230 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.512248 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.512260 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:33Z","lastTransitionTime":"2025-12-12T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.601169 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.601232 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.601232 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:33 crc kubenswrapper[4917]: E1212 00:07:33.601313 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.601398 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:33 crc kubenswrapper[4917]: E1212 00:07:33.601461 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:33 crc kubenswrapper[4917]: E1212 00:07:33.601397 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:33 crc kubenswrapper[4917]: E1212 00:07:33.601526 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.614832 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.614907 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.614924 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.614940 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.614950 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:33Z","lastTransitionTime":"2025-12-12T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.717501 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.717549 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.717559 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.717574 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.717586 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:33Z","lastTransitionTime":"2025-12-12T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.820418 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.820813 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.820826 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.820843 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.820854 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:33Z","lastTransitionTime":"2025-12-12T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.922461 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.922498 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.922507 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.922522 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:33 crc kubenswrapper[4917]: I1212 00:07:33.922533 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:33Z","lastTransitionTime":"2025-12-12T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.024724 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.024775 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.024786 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.024806 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.024817 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:34Z","lastTransitionTime":"2025-12-12T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.127511 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.127578 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.127601 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.127633 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.127690 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:34Z","lastTransitionTime":"2025-12-12T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.229916 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.229961 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.229972 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.229988 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.230000 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:34Z","lastTransitionTime":"2025-12-12T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.332866 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.332913 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.332923 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.332938 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.332948 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:34Z","lastTransitionTime":"2025-12-12T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.435797 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.435860 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.435875 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.435894 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.435906 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:34Z","lastTransitionTime":"2025-12-12T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.538099 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.538144 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.538157 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.538173 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.538185 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:34Z","lastTransitionTime":"2025-12-12T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.641087 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.641154 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.641176 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.641205 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.641227 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:34Z","lastTransitionTime":"2025-12-12T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.744900 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.744940 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.744952 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.744968 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.744979 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:34Z","lastTransitionTime":"2025-12-12T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.847464 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.847520 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.847531 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.847551 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.847565 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:34Z","lastTransitionTime":"2025-12-12T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.950734 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.950789 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.950802 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.950819 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:34 crc kubenswrapper[4917]: I1212 00:07:34.950831 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:34Z","lastTransitionTime":"2025-12-12T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.053541 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.053586 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.053597 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.053615 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.053628 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:35Z","lastTransitionTime":"2025-12-12T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.156992 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.157035 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.157045 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.157060 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.157072 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:35Z","lastTransitionTime":"2025-12-12T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.259508 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.259557 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.259572 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.259593 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.259605 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:35Z","lastTransitionTime":"2025-12-12T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.361745 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.361795 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.361807 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.361822 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.361834 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:35Z","lastTransitionTime":"2025-12-12T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.464886 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.464924 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.464935 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.464951 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.464959 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:35Z","lastTransitionTime":"2025-12-12T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.567817 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.567865 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.567875 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.567896 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.567907 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:35Z","lastTransitionTime":"2025-12-12T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.601802 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.601882 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.601830 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:35 crc kubenswrapper[4917]: E1212 00:07:35.601963 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.602064 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:35 crc kubenswrapper[4917]: E1212 00:07:35.602215 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:35 crc kubenswrapper[4917]: E1212 00:07:35.602278 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:35 crc kubenswrapper[4917]: E1212 00:07:35.602410 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.616008 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmhzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a3ffe88-ff5c-41e9-9824-03044be1c979\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d30db695fabfcf76de6176e7e7d3cc4633241ea21d12162239ace9218c6153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpnzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmhzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.631442 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-24mnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ee00e08-bb29-427d-9de3-6b0616e409fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c68257e5dd1d97628cb53c884e963ded61b1a597be47717aceb3b97fde8f979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:07:16Z\\\",\\\"message\\\":\\\"2025-12-12T00:06:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_97eaa2e1-a0a6-4a6a-87e4-6356f1922b2a\\\\n2025-12-12T00:06:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_97eaa2e1-a0a6-4a6a-87e4-6356f1922b2a to /host/opt/cni/bin/\\\\n2025-12-12T00:06:31Z [verbose] multus-daemon started\\\\n2025-12-12T00:06:31Z [verbose] 
Readiness Indicator file check\\\\n2025-12-12T00:07:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xj5rw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-24mnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.648780 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75be0c6b-6364-4d5a-9494-25cdbd35ce08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67
b457c1fe72499a49de3af534364285ff001c09bd9ce9352500491c902e51f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bf92255344a7d4848c78d475ed8ccdbe61ff0509d4a66057dd839de01ab0845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://653ac1bcc488643cfb67f224f86fa223b9d7cc714c5d4a6147791e674f896da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193063af27344309d1a6454fe947731af1d619a71bd05c9ceba5b8e92a6a2d60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd105a2b84dea765fba8c9a5d2ea96e9f8d8844a42020937e98042ffdecc2c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e9b9088b52e3de8015522506aa5014f9172857465d16d5ac3d70a72a82b9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4d0871d96fdecd5e59e9919d80008a12b3b3dcf1523c714c351c0950ff66b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzck8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qkh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.660088 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tpmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df038132-e4e9-47cf-a5e4-384eff3548db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa2d124555782044df729b3dff9dada2691995e23515e5096aaafd3fc2507d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tpmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.670718 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0da4d2a3-cbfd-4202-8423-5e6d0de197a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048e4e3321ff297e044bfa5ac97a862037024980439ac4c475215dc578c4b542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1b39a06416858eab72e0bd88b65da74a16c8a46029841452ee888b039a127d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f1b39a06416858eab72e0bd88b65da74a16c8a46029841452ee888b039a127d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc 
kubenswrapper[4917]: I1212 00:07:35.670802 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.670838 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.670850 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.670868 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.670880 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:35Z","lastTransitionTime":"2025-12-12T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.684223 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd3b1e30-806e-4e60-8457-5a8dc8255b49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db3014d28fbd4ad947b0bb07b8e2cdd07a9a42923f12a89dafbb482228861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf369a1cb2eaf2ca16299cd6a8e314
ae2693ede79e120eaae657dcfc7c1629c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ac02655901d9d729b6cf2cccf17ed4104f1d1f568d813c68920686068db586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19f45cb1d89baa01433ddb5436105f6e177bd4af2e37c9d825d0b6ba6619d954\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.700316 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c1cd59d85364aed242961fa37f5a258ffef0eaaa8bd9f191d9a1e9ecbcbca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda4e9992dc40e586fbe279d16adb5af2bc24a667537c0c1d01fc30f379abe55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.716016 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.727959 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f4t96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f4853b-9736-4a03-8c86-1627cb51acbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4ftb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f4t96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc 
kubenswrapper[4917]: I1212 00:07:35.741804 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84ca9710-d96a-4794-a7a2-d7440ab355e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48da97c501f86d
eff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 00:06:22.961192 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 00:06:22.961417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 00:06:22.962830 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2194209549/tls.crt::/tmp/serving-cert-2194209549/tls.key\\\\\\\"\\\\nI1212 00:06:23.206818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 00:06:23.209209 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 00:06:23.209228 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 00:06:23.209254 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 00:06:23.209260 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 00:06:23.213500 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 00:06:23.213532 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 00:06:23.213542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 00:06:23.213545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 00:06:23.213549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 
00:06:23.213552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 00:06:23.213604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 00:06:23.215617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.754106 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf9740bc98f963815fb3f551fd7eeb3f1161b84f5c6b14c1dde269526be190e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.773069 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c740630c-23cb-4c02-ab4e-bac3d773dce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10f34727d81ce2d71c261662f4524ca9e0e0d3fd5075ca82a7bd54728b62fab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"message\\\":\\\"-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 00:06:45.133353 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 00:06:45.133480 6371 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 00:06:45.134180 6371 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 00:06:45.134205 6371 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 00:06:45.134228 6371 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 00:06:45.134235 6371 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 00:06:45.134250 6371 handler.go:208] Removed *v1.Node event handler 7\\\\nI1212 00:06:45.134258 6371 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1212 00:06:45.134268 6371 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1212 00:06:45.134317 6371 factory.go:656] Stopping watch factory\\\\nI1212 00:06:45.134341 6371 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:06:45.134342 6371 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 00:06:45.134355 6371 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1212 00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T00:07:23Z\\\",\\\"message\\\":\\\"56 6611 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr: failed to check if pod openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr is in primary UDN: could not find OVN pod annotation in map[]\\\\nI1212 00:07:22.529894 6611 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-nplkn: failed to check if pod 
openshift-multus/multus-admission-controller-857f4d67dd-nplkn is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI1212 00:07:22.529928 6611 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-dns-operator/dns-operator-744455d44c-w7hp2: failed to check if pod openshift-dns-operator/dns-operator-744455d44c-w7hp2 is in primary UDN: could not find OVN pod annotation in map[]\\\\nE1212 00:07:22.604565 6611 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI1212 00:07:22.605711 6611 ovnkube.go:599] Stopped ovnkube\\\\nI1212 00:07:22.605753 6611 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\
\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9gct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-26hjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.773723 4917 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.773758 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.773767 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.773785 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.773796 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:35Z","lastTransitionTime":"2025-12-12T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.789417 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30634f44-c994-4857-b96d-93377817d2e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68ff6511f714260aeef29501dc9e9184549ec6a72fc393b20a09b1b110efa73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c0f6c96c0
d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29a7dcaca84df604fdc984b3dbc207b38699381b6d4cb75f991801133a016e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648aec67877ed328734e9cfe81a3e45055c194910a2be56957bde5998abb3f1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.801801 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.814822 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb8c0dbd9f632ccc5487525b78bed86aa57563cbcdc828f3dcd76c7b670ad69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.833901 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38bce33-2b8a-4640-a64a-a87485fa52e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8447e16d8e6aa7cd7f49dcdb9139cf311ade5aac1561dd894f1040cc0d352ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02716ba394064a78d2f67241000e581a3b57a77d349ebe84c178d56763575595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee374fe9b945813c4b4c4d0c4520666a3143d70cb53f579b58ae306ea56587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56ed1ba84a34f4c73c146f16b267782610e835847596d4a4795cf0ed3144001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c58f5a302aac53beaefd1b6e85f1ab6bd511fa08841118a9b55149065aee47aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0bfd3a18c72a0be214ef2f11fe764bd88d87e6f65d6a021c6865727e614d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0bfd3a18c72a0be214ef2f11fe764bd88d87e6f65d6a021c6865727e614d12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea6d1f2a58a7452e54ed0ef0b4409a20d4b3fe44fe176b0a8d52602a2f972be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ea6d1f2a58a7452e54ed0ef0b4409a20d4b3fe44fe176b0a8d52602a2f972be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://debe388b959f90a964de31373a32ce4a671b7d8e7ebc061cf9de7723e1856ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://debe388b959f90a964de31373a32ce4a671b7d8e7ebc061cf9de7723e1856ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-12-12T00:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.845597 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.856328 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7b
f90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.866424 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf7f265e6151e198ba075d608d52a4
2d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.876112 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.876159 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.876168 4917 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.876183 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.876194 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:35Z","lastTransitionTime":"2025-12-12T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.981255 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.981357 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.981384 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.981416 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:35 crc kubenswrapper[4917]: I1212 00:07:35.981450 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:35Z","lastTransitionTime":"2025-12-12T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.084339 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.084390 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.084405 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.084429 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.084445 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:36Z","lastTransitionTime":"2025-12-12T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.187030 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.187126 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.187151 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.187183 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.187203 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:36Z","lastTransitionTime":"2025-12-12T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.289917 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.290014 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.290040 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.290076 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.290098 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:36Z","lastTransitionTime":"2025-12-12T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.392564 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.392622 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.392639 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.392725 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.392748 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:36Z","lastTransitionTime":"2025-12-12T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.495827 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.495886 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.495897 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.495915 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.495932 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:36Z","lastTransitionTime":"2025-12-12T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.598465 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.598514 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.598530 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.598550 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.598561 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:36Z","lastTransitionTime":"2025-12-12T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.700083 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.700120 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.700130 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.700146 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.700157 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:36Z","lastTransitionTime":"2025-12-12T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.802071 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.802130 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.802141 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.802159 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.802170 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:36Z","lastTransitionTime":"2025-12-12T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.905090 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.905153 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.905164 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.905186 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:36 crc kubenswrapper[4917]: I1212 00:07:36.905202 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:36Z","lastTransitionTime":"2025-12-12T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.007429 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.007477 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.007488 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.007506 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.007520 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.110338 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.110382 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.110395 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.110413 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.110424 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.212555 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.212600 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.212612 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.212628 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.212667 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.314843 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.314885 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.314895 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.314912 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.314922 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.417437 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.417477 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.417489 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.417504 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.417514 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.520172 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.520224 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.520236 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.520251 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.520262 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.601246 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.601332 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:37 crc kubenswrapper[4917]: E1212 00:07:37.601429 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:37 crc kubenswrapper[4917]: E1212 00:07:37.601557 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.601279 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:37 crc kubenswrapper[4917]: E1212 00:07:37.601701 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.601765 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:37 crc kubenswrapper[4917]: E1212 00:07:37.601831 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.623295 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.623337 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.623350 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.623365 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.623378 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.725846 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.725877 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.725885 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.725898 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.725907 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.828899 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.828962 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.828976 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.829025 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.829041 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.876510 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.876546 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.876554 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.876570 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.876581 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: E1212 00:07:37.894897 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.899434 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.899491 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.899501 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.899519 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.899528 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: E1212 00:07:37.916990 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.921796 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.921846 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.921856 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.921873 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.921918 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: E1212 00:07:37.936515 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.940211 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.940259 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.940270 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.940288 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.940303 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: E1212 00:07:37.953406 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.957603 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.957673 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.957688 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.957704 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.957717 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:37 crc kubenswrapper[4917]: E1212 00:07:37.971130 4917 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T00:07:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"153f6872-46ff-42ea-b410-996e545902e8\\\",\\\"systemUUID\\\":\\\"3860a222-2102-46c2-9063-9861157893b4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:37Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:37 crc kubenswrapper[4917]: E1212 00:07:37.971266 4917 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.972969 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.973009 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.973020 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.973038 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:37 crc kubenswrapper[4917]: I1212 00:07:37.973051 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:37Z","lastTransitionTime":"2025-12-12T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.075280 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.075337 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.075347 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.075368 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.075379 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:38Z","lastTransitionTime":"2025-12-12T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.178752 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.178798 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.178807 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.178821 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.178831 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:38Z","lastTransitionTime":"2025-12-12T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.282217 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.282283 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.282296 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.282318 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.282334 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:38Z","lastTransitionTime":"2025-12-12T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.385085 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.385127 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.385138 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.385155 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.385170 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:38Z","lastTransitionTime":"2025-12-12T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.488198 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.488263 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.488281 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.488306 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.488324 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:38Z","lastTransitionTime":"2025-12-12T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.591201 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.591237 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.591247 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.591260 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.591270 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:38Z","lastTransitionTime":"2025-12-12T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.603767 4917 scope.go:117] "RemoveContainer" containerID="25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15" Dec 12 00:07:38 crc kubenswrapper[4917]: E1212 00:07:38.604124 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-26hjd_openshift-ovn-kubernetes(c740630c-23cb-4c02-ab4e-bac3d773dce4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.618377 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bddbc3a-d8cc-4766-80d3-92562e840be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa961858837ad7982ce3acfc3b0ef5cf48126b8aba44a2b58462744eea1c91b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzhcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:29Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ktvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.631461 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a97d2a-f733-4608-819e-a5c10747433b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://029347d909fd9b3552ad0f4b373a10240dab46e2d6acf9bf988f2f2b954993f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9367706cf7f265e6151e198ba075d608d52a42d17a97d4cae35e37a050155d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2zzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wm9sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.657308 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c38bce33-2b8a-4640-a64a-a87485fa52e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8447e16d8e6aa7cd7f49dcdb9139cf311ade5aac1561dd894f1040cc0d352ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02716ba394064a78d2f67241000e581a3b57a77d349ebe84c178d56763575595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee374fe9b945813c4b4c4d0c4520666a3143d70cb53f579b58ae306ea56587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a56ed1ba84a34f4c73c146f16b267782610e835847596d4a4795cf0ed3144001\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c58f5a302aac53beaefd1b6e85f1ab6bd511fa08841118a9b55149065aee47aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0bfd3a18c72a0be214ef2f11fe764bd88d87e6f65d6a021c6865727e614d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0bfd3a18c72a0be214ef2f11fe764bd88d87e6f65d6a021c6865727e614d12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea6d1f2a58a7452e54ed0ef0b4409a20d4b3fe44fe176b0a8d52602a2f972be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ea6d1f2a58a7452e54ed0ef0b4409a20d4b3fe44fe176b0a8d52602a2f972be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://debe388b959f90a964de31373a32ce4a671b7d8e7ebc061cf9de7723e1856ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://debe388b959f90a964de31373a32ce4a67
1b7d8e7ebc061cf9de7723e1856ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T00:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T00:06:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.670435 4917 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T00:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T00:07:38Z is after 2025-08-24T17:21:41Z" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.694838 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.694887 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.694903 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 
00:07:38.694925 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.694941 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:38Z","lastTransitionTime":"2025-12-12T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.742195 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hmhzk" podStartSLOduration=69.742147068 podStartE2EDuration="1m9.742147068s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:38.739214755 +0000 UTC m=+93.517015648" watchObservedRunningTime="2025-12-12 00:07:38.742147068 +0000 UTC m=+93.519947921" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.787816 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-24mnq" podStartSLOduration=69.787788926 podStartE2EDuration="1m9.787788926s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:38.765182162 +0000 UTC m=+93.542983015" watchObservedRunningTime="2025-12-12 00:07:38.787788926 +0000 UTC m=+93.565589749" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.797464 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.797511 4917 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.797524 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.797542 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.797554 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:38Z","lastTransitionTime":"2025-12-12T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.803258 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qkh7m" podStartSLOduration=69.803238504 podStartE2EDuration="1m9.803238504s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:38.787753575 +0000 UTC m=+93.565554388" watchObservedRunningTime="2025-12-12 00:07:38.803238504 +0000 UTC m=+93.581039317" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.815129 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5tpmh" podStartSLOduration=69.815115876 podStartE2EDuration="1m9.815115876s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:38.803764117 +0000 UTC 
m=+93.581564970" watchObservedRunningTime="2025-12-12 00:07:38.815115876 +0000 UTC m=+93.592916689" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.835379 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=39.835359801 podStartE2EDuration="39.835359801s" podCreationTimestamp="2025-12-12 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:38.834734756 +0000 UTC m=+93.612535569" watchObservedRunningTime="2025-12-12 00:07:38.835359801 +0000 UTC m=+93.613160624" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.835759 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.83575423 podStartE2EDuration="9.83575423s" podCreationTimestamp="2025-12-12 00:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:38.816323125 +0000 UTC m=+93.594123938" watchObservedRunningTime="2025-12-12 00:07:38.83575423 +0000 UTC m=+93.613555043" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.893090 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=75.893069775 podStartE2EDuration="1m15.893069775s" podCreationTimestamp="2025-12-12 00:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:38.892996663 +0000 UTC m=+93.670797476" watchObservedRunningTime="2025-12-12 00:07:38.893069775 +0000 UTC m=+93.670870588" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.900008 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.900054 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.900063 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.900079 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.900089 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:38Z","lastTransitionTime":"2025-12-12T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:38 crc kubenswrapper[4917]: I1212 00:07:38.934976 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.934959151 podStartE2EDuration="1m12.934959151s" podCreationTimestamp="2025-12-12 00:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:38.932878321 +0000 UTC m=+93.710679144" watchObservedRunningTime="2025-12-12 00:07:38.934959151 +0000 UTC m=+93.712759964" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.002152 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.002199 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.002207 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.002223 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.002234 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:39Z","lastTransitionTime":"2025-12-12T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.105247 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.105294 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.105306 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.105325 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.105338 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:39Z","lastTransitionTime":"2025-12-12T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.207884 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.207955 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.207977 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.208009 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.208033 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:39Z","lastTransitionTime":"2025-12-12T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.310488 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.310542 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.310556 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.310576 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.310592 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:39Z","lastTransitionTime":"2025-12-12T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.412384 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.412455 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.412468 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.412486 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.412498 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:39Z","lastTransitionTime":"2025-12-12T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.515535 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.515597 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.515615 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.515638 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.515682 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:39Z","lastTransitionTime":"2025-12-12T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.601198 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.601292 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:39 crc kubenswrapper[4917]: E1212 00:07:39.601335 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.601347 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.601198 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:39 crc kubenswrapper[4917]: E1212 00:07:39.601426 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:39 crc kubenswrapper[4917]: E1212 00:07:39.601534 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:39 crc kubenswrapper[4917]: E1212 00:07:39.601624 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.617718 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.617766 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.617785 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.617801 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.617812 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:39Z","lastTransitionTime":"2025-12-12T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.720015 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.720055 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.720082 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.720096 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.720105 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:39Z","lastTransitionTime":"2025-12-12T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.822394 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.822464 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.822482 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.822506 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.822524 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:39Z","lastTransitionTime":"2025-12-12T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.925427 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.925477 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.925503 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.925522 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:39 crc kubenswrapper[4917]: I1212 00:07:39.925536 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:39Z","lastTransitionTime":"2025-12-12T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.028837 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.028914 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.028932 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.028956 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.028972 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:40Z","lastTransitionTime":"2025-12-12T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.131223 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.131278 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.131290 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.131315 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.131328 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:40Z","lastTransitionTime":"2025-12-12T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.235262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.235351 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.235378 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.235407 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.235431 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:40Z","lastTransitionTime":"2025-12-12T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.339232 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.339334 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.339351 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.339374 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.339392 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:40Z","lastTransitionTime":"2025-12-12T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.442075 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.442122 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.442135 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.442152 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.442163 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:40Z","lastTransitionTime":"2025-12-12T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.544958 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.545007 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.545021 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.545063 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.545077 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:40Z","lastTransitionTime":"2025-12-12T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.648404 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.649024 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.649041 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.649087 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.649101 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:40Z","lastTransitionTime":"2025-12-12T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.751992 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.752040 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.752051 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.752066 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.752075 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:40Z","lastTransitionTime":"2025-12-12T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.854889 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.854928 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.854937 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.854950 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.854960 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:40Z","lastTransitionTime":"2025-12-12T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.958314 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.958374 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.958397 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.958428 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:40 crc kubenswrapper[4917]: I1212 00:07:40.958451 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:40Z","lastTransitionTime":"2025-12-12T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.060752 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.060802 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.060817 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.060835 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.060845 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:41Z","lastTransitionTime":"2025-12-12T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.163561 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.163783 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.163797 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.163811 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.163820 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:41Z","lastTransitionTime":"2025-12-12T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.266636 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.266700 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.266712 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.266729 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.266741 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:41Z","lastTransitionTime":"2025-12-12T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.369464 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.369502 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.369510 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.369524 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.369533 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:41Z","lastTransitionTime":"2025-12-12T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.471703 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.471752 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.471766 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.471784 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.471804 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:41Z","lastTransitionTime":"2025-12-12T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.574254 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.574293 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.574302 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.574317 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.574328 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:41Z","lastTransitionTime":"2025-12-12T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.601922 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.601990 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.602003 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:41 crc kubenswrapper[4917]: E1212 00:07:41.602614 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:41 crc kubenswrapper[4917]: E1212 00:07:41.602718 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:41 crc kubenswrapper[4917]: E1212 00:07:41.602776 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.602845 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:41 crc kubenswrapper[4917]: E1212 00:07:41.603102 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.676409 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.676451 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.676465 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.676481 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.676494 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:41Z","lastTransitionTime":"2025-12-12T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.778926 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.778970 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.778981 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.778999 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.779010 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:41Z","lastTransitionTime":"2025-12-12T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.881337 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.881675 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.881862 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.882007 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.882130 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:41Z","lastTransitionTime":"2025-12-12T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.985029 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.985349 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.985440 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.985530 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:41 crc kubenswrapper[4917]: I1212 00:07:41.985624 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:41Z","lastTransitionTime":"2025-12-12T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.088713 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.088760 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.088769 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.088789 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.088801 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:42Z","lastTransitionTime":"2025-12-12T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.191543 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.191622 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.191639 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.191710 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.191729 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:42Z","lastTransitionTime":"2025-12-12T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.294689 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.294986 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.295086 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.295157 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.295221 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:42Z","lastTransitionTime":"2025-12-12T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.397414 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.397454 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.397465 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.397480 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.397490 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:42Z","lastTransitionTime":"2025-12-12T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.500292 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.500346 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.500358 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.500376 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.500386 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:42Z","lastTransitionTime":"2025-12-12T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.603150 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.603195 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.603206 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.603223 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.603235 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:42Z","lastTransitionTime":"2025-12-12T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.705630 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.705710 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.705729 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.705751 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.705766 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:42Z","lastTransitionTime":"2025-12-12T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.809137 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.809186 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.809195 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.809212 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.809222 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:42Z","lastTransitionTime":"2025-12-12T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.912956 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.913014 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.913028 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.913047 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:42 crc kubenswrapper[4917]: I1212 00:07:42.913060 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:42Z","lastTransitionTime":"2025-12-12T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.015555 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.015616 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.015632 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.015672 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.015688 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:43Z","lastTransitionTime":"2025-12-12T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.120205 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.120285 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.120302 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.120332 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.120361 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:43Z","lastTransitionTime":"2025-12-12T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.223125 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.223165 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.223177 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.223194 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.223208 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:43Z","lastTransitionTime":"2025-12-12T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.326808 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.326869 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.326889 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.326913 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.326932 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:43Z","lastTransitionTime":"2025-12-12T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.430399 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.430466 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.430487 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.430520 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.430539 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:43Z","lastTransitionTime":"2025-12-12T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.533419 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.533466 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.533476 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.533494 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.533505 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:43Z","lastTransitionTime":"2025-12-12T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.601076 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.601128 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.601091 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.601091 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:43 crc kubenswrapper[4917]: E1212 00:07:43.601327 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:43 crc kubenswrapper[4917]: E1212 00:07:43.601455 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:43 crc kubenswrapper[4917]: E1212 00:07:43.601549 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:43 crc kubenswrapper[4917]: E1212 00:07:43.601625 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.636481 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.636527 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.636539 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.636555 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.636565 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:43Z","lastTransitionTime":"2025-12-12T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.739549 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.739581 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.739592 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.739609 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.739622 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:43Z","lastTransitionTime":"2025-12-12T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.843463 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.843556 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.843571 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.843593 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.843608 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:43Z","lastTransitionTime":"2025-12-12T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.946081 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.946145 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.946157 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.946173 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:43 crc kubenswrapper[4917]: I1212 00:07:43.946184 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:43Z","lastTransitionTime":"2025-12-12T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.048724 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.048774 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.048788 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.048806 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.048818 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:44Z","lastTransitionTime":"2025-12-12T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.152259 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.152323 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.152339 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.152363 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.152377 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:44Z","lastTransitionTime":"2025-12-12T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.255512 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.255560 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.255573 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.255591 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.255603 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:44Z","lastTransitionTime":"2025-12-12T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.357552 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.357614 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.357628 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.357677 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.357690 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:44Z","lastTransitionTime":"2025-12-12T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.460052 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.460110 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.460128 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.460153 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.460171 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:44Z","lastTransitionTime":"2025-12-12T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.563597 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.563668 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.563682 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.563702 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.563714 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:44Z","lastTransitionTime":"2025-12-12T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.666085 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.666161 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.666179 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.666267 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.666286 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:44Z","lastTransitionTime":"2025-12-12T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.769184 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.769234 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.769245 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.769289 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.769303 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:44Z","lastTransitionTime":"2025-12-12T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.872968 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.873020 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.873034 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.873052 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.873064 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:44Z","lastTransitionTime":"2025-12-12T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.975914 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.975952 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.975967 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.975986 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:44 crc kubenswrapper[4917]: I1212 00:07:44.976001 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:44Z","lastTransitionTime":"2025-12-12T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.079041 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.079088 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.079100 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.079119 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.079133 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:45Z","lastTransitionTime":"2025-12-12T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.188244 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.188303 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.188315 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.188335 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.188347 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:45Z","lastTransitionTime":"2025-12-12T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.290775 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.290842 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.290858 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.290884 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.290901 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:45Z","lastTransitionTime":"2025-12-12T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.394558 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.394727 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.394758 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.394789 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.394812 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:45Z","lastTransitionTime":"2025-12-12T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.498039 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.498085 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.498096 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.498111 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.498121 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:45Z","lastTransitionTime":"2025-12-12T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.600857 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.600993 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.601017 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.601025 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.601040 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.601050 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:45Z","lastTransitionTime":"2025-12-12T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.601907 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.602136 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.602147 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:45 crc kubenswrapper[4917]: E1212 00:07:45.602193 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:45 crc kubenswrapper[4917]: E1212 00:07:45.602238 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:45 crc kubenswrapper[4917]: E1212 00:07:45.602247 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:45 crc kubenswrapper[4917]: E1212 00:07:45.602331 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.650005 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podStartSLOduration=76.649981826 podStartE2EDuration="1m16.649981826s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:45.634809554 +0000 UTC m=+100.412610377" watchObservedRunningTime="2025-12-12 00:07:45.649981826 +0000 UTC m=+100.427782639" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.650345 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wm9sp" podStartSLOduration=76.650339245 podStartE2EDuration="1m16.650339245s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:45.647553027 +0000 UTC m=+100.425353930" watchObservedRunningTime="2025-12-12 00:07:45.650339245 +0000 UTC m=+100.428140078" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.682866 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=13.682844562 podStartE2EDuration="13.682844562s" podCreationTimestamp="2025-12-12 00:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:45.681852287 +0000 UTC m=+100.459653120" watchObservedRunningTime="2025-12-12 00:07:45.682844562 +0000 UTC m=+100.460645385" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.702374 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.702426 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.702438 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.702456 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.702467 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:45Z","lastTransitionTime":"2025-12-12T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.804556 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.804616 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.804632 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.804693 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.804711 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:45Z","lastTransitionTime":"2025-12-12T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.907639 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.907728 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.907743 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.907765 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:45 crc kubenswrapper[4917]: I1212 00:07:45.907781 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:45Z","lastTransitionTime":"2025-12-12T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.010805 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.010903 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.010919 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.010940 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.010956 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:46Z","lastTransitionTime":"2025-12-12T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.112800 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.112855 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.112868 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.112894 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.112913 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:46Z","lastTransitionTime":"2025-12-12T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.214890 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.214956 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.214973 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.214996 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.215016 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:46Z","lastTransitionTime":"2025-12-12T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.318441 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.318507 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.318537 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.318563 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.318580 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:46Z","lastTransitionTime":"2025-12-12T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.421222 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.421262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.421271 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.421285 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.421295 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:46Z","lastTransitionTime":"2025-12-12T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.524495 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.524536 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.524545 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.524561 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.524573 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:46Z","lastTransitionTime":"2025-12-12T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.627223 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.627262 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.627272 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.627285 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.627295 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:46Z","lastTransitionTime":"2025-12-12T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.730101 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.730154 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.730164 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.730179 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.730189 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:46Z","lastTransitionTime":"2025-12-12T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.833521 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.833580 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.833590 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.833611 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.833630 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:46Z","lastTransitionTime":"2025-12-12T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.935873 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.935909 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.935920 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.935936 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:46 crc kubenswrapper[4917]: I1212 00:07:46.935950 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:46Z","lastTransitionTime":"2025-12-12T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.039368 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.039431 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.039443 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.039466 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.039477 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:47Z","lastTransitionTime":"2025-12-12T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.142982 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.143041 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.143069 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.143094 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.143296 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:47Z","lastTransitionTime":"2025-12-12T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.245931 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.246017 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.246053 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.246083 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.246106 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:47Z","lastTransitionTime":"2025-12-12T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.349212 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.349257 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.349269 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.349285 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.349296 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:47Z","lastTransitionTime":"2025-12-12T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.452738 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.452941 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.452984 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.453017 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.453057 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:47Z","lastTransitionTime":"2025-12-12T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.556345 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.556434 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.556455 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.556493 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.556533 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:47Z","lastTransitionTime":"2025-12-12T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.601090 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.601160 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.601121 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.601107 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:47 crc kubenswrapper[4917]: E1212 00:07:47.601287 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:47 crc kubenswrapper[4917]: E1212 00:07:47.601392 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:47 crc kubenswrapper[4917]: E1212 00:07:47.601542 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:47 crc kubenswrapper[4917]: E1212 00:07:47.601635 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.659736 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.659785 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.659797 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.659816 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.659832 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:47Z","lastTransitionTime":"2025-12-12T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.762585 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.762621 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.762632 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.762668 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.762681 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:47Z","lastTransitionTime":"2025-12-12T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.865791 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.865858 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.865880 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.865910 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.865936 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:47Z","lastTransitionTime":"2025-12-12T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.969011 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.969091 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.969118 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.969148 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.969172 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:47Z","lastTransitionTime":"2025-12-12T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:47 crc kubenswrapper[4917]: I1212 00:07:47.979490 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:47 crc kubenswrapper[4917]: E1212 00:07:47.979685 4917 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:07:47 crc kubenswrapper[4917]: E1212 00:07:47.979750 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs podName:58f4853b-9736-4a03-8c86-1627cb51acbe nodeName:}" failed. No retries permitted until 2025-12-12 00:08:51.979736468 +0000 UTC m=+166.757537281 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs") pod "network-metrics-daemon-f4t96" (UID: "58f4853b-9736-4a03-8c86-1627cb51acbe") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.071281 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.071336 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.071357 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.071380 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.071395 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:48Z","lastTransitionTime":"2025-12-12T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.173374 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.173729 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.173832 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.173958 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.174055 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:48Z","lastTransitionTime":"2025-12-12T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.187042 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.187090 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.187100 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.187118 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.187129 4917 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T00:07:48Z","lastTransitionTime":"2025-12-12T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.235214 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht"] Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.235673 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.237786 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.238102 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.238284 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.238678 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.383167 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.383217 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.383247 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.383286 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.383307 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.483854 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.483896 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.483926 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.483942 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.484091 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.484083 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.484873 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.486094 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.497838 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.506759 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7gxht\" (UID: \"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:48 crc kubenswrapper[4917]: I1212 00:07:48.550728 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" Dec 12 00:07:49 crc kubenswrapper[4917]: I1212 00:07:49.168412 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" event={"ID":"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e","Type":"ContainerStarted","Data":"5b756f1bd8785195297a0d0751a0c8a4cc45b16aa8d02faee43a50a253409570"} Dec 12 00:07:49 crc kubenswrapper[4917]: I1212 00:07:49.168837 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" event={"ID":"91f3dc52-6a94-48bf-85f8-f59c2d5f2a3e","Type":"ContainerStarted","Data":"26772d1e97cd5714b8ca6e11596b72d6ee7734ca4888c0b4dbe98bae859a07c8"} Dec 12 00:07:49 crc kubenswrapper[4917]: I1212 00:07:49.601475 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:49 crc kubenswrapper[4917]: E1212 00:07:49.601693 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:49 crc kubenswrapper[4917]: I1212 00:07:49.601969 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:49 crc kubenswrapper[4917]: I1212 00:07:49.602081 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:49 crc kubenswrapper[4917]: E1212 00:07:49.602216 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:49 crc kubenswrapper[4917]: E1212 00:07:49.602235 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:49 crc kubenswrapper[4917]: I1212 00:07:49.602479 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:49 crc kubenswrapper[4917]: E1212 00:07:49.602638 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:51 crc kubenswrapper[4917]: I1212 00:07:51.601184 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:51 crc kubenswrapper[4917]: I1212 00:07:51.601433 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:51 crc kubenswrapper[4917]: E1212 00:07:51.602167 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:51 crc kubenswrapper[4917]: I1212 00:07:51.601519 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:51 crc kubenswrapper[4917]: E1212 00:07:51.602423 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:51 crc kubenswrapper[4917]: I1212 00:07:51.601484 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:51 crc kubenswrapper[4917]: E1212 00:07:51.602624 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:51 crc kubenswrapper[4917]: E1212 00:07:51.602259 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:53 crc kubenswrapper[4917]: I1212 00:07:53.601556 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:53 crc kubenswrapper[4917]: I1212 00:07:53.601556 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:53 crc kubenswrapper[4917]: I1212 00:07:53.601582 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:53 crc kubenswrapper[4917]: E1212 00:07:53.601835 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:53 crc kubenswrapper[4917]: E1212 00:07:53.602322 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:53 crc kubenswrapper[4917]: E1212 00:07:53.602393 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:53 crc kubenswrapper[4917]: I1212 00:07:53.602522 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:53 crc kubenswrapper[4917]: E1212 00:07:53.602738 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:53 crc kubenswrapper[4917]: I1212 00:07:53.602785 4917 scope.go:117] "RemoveContainer" containerID="25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15" Dec 12 00:07:54 crc kubenswrapper[4917]: I1212 00:07:54.189058 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/2.log" Dec 12 00:07:54 crc kubenswrapper[4917]: I1212 00:07:54.192931 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerStarted","Data":"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52"} Dec 12 00:07:54 crc kubenswrapper[4917]: I1212 00:07:54.193481 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:07:54 crc kubenswrapper[4917]: I1212 00:07:54.225107 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7gxht" podStartSLOduration=85.225005214 podStartE2EDuration="1m25.225005214s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:49.182835195 +0000 UTC m=+103.960636008" watchObservedRunningTime="2025-12-12 00:07:54.225005214 +0000 UTC m=+109.002806087" Dec 12 00:07:54 crc kubenswrapper[4917]: I1212 00:07:54.226205 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podStartSLOduration=85.226190883 podStartE2EDuration="1m25.226190883s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:07:54.220806912 +0000 UTC m=+108.998607745" watchObservedRunningTime="2025-12-12 00:07:54.226190883 +0000 UTC m=+109.003991766" Dec 12 00:07:54 crc kubenswrapper[4917]: I1212 00:07:54.831147 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-f4t96"] Dec 12 00:07:54 crc kubenswrapper[4917]: I1212 00:07:54.831275 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:54 crc kubenswrapper[4917]: E1212 00:07:54.831374 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:55 crc kubenswrapper[4917]: I1212 00:07:55.601504 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:55 crc kubenswrapper[4917]: I1212 00:07:55.601503 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:55 crc kubenswrapper[4917]: I1212 00:07:55.601531 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:55 crc kubenswrapper[4917]: E1212 00:07:55.602334 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 00:07:55 crc kubenswrapper[4917]: E1212 00:07:55.602471 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 00:07:55 crc kubenswrapper[4917]: E1212 00:07:55.602618 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 00:07:56 crc kubenswrapper[4917]: I1212 00:07:56.601241 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:56 crc kubenswrapper[4917]: E1212 00:07:56.601423 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f4t96" podUID="58f4853b-9736-4a03-8c86-1627cb51acbe" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.023386 4917 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.023562 4917 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.070830 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lss4q"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.078063 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h6z88"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.078303 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.078572 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.078674 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.080232 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.089013 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.089132 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.089185 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.089303 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.089331 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.089358 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.089427 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.089450 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.089513 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.089576 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.090256 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.090267 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.090302 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.090675 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.090725 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.096167 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.096685 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.097285 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.097382 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.097595 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.098194 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.098739 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.099736 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.099768 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5dp8w"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.099900 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.100160 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.100716 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.102523 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29424960-ttvzz"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.103089 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29424960-ttvzz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.103327 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dzpcq"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.108445 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.125994 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.126669 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.126958 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w7hp2"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.127247 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.127431 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.127551 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.127863 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.128079 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.128266 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jctrq"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.128541 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.128638 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jctrq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.128880 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129092 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129252 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129285 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129408 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: 
I1212 00:07:57.129415 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129095 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129713 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129748 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129848 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129870 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129662 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.129997 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.130098 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.130124 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.134242 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.134610 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.134931 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.135490 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-742lr"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.136100 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.136183 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8frb7"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.136787 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.136972 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.137272 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.138041 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.138262 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.138459 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.138747 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.138951 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.139367 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.139531 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.139563 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 12 00:07:57 crc 
kubenswrapper[4917]: I1212 00:07:57.139673 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.139721 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.139933 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.139941 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.140261 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.139530 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.140570 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.140712 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.144311 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.146461 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.150554 4917 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-84jtz"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.151297 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.153097 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.155268 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pzbrh"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.155813 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.156006 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.155276 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.156546 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.160363 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.161041 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-th48g"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.176618 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.179457 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fhrmw"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.191730 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.192170 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.192626 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.192818 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.192852 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193018 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193037 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193038 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193174 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 
00:07:57.193193 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193275 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193315 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193389 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193409 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193512 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193531 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193579 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193729 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193798 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.193837 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 12 00:07:57 crc 
kubenswrapper[4917]: I1212 00:07:57.193848 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.192678 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194060 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194188 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194327 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194595 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194788 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194869 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/234c3156-bf4c-464d-8ee4-957474f3bb82-console-oauth-config\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194904 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-oauth-serving-cert\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194927 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe29ce3-49c3-466c-8e10-02d57dba74fa-trusted-ca\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194950 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-config\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194968 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dbe29ce3-49c3-466c-8e10-02d57dba74fa-serving-cert\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194984 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/234c3156-bf4c-464d-8ee4-957474f3bb82-console-serving-cert\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195002 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8edfd21-19a8-499c-b67b-93883083c239-config\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195018 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2-metrics-tls\") pod \"dns-operator-744455d44c-w7hp2\" (UID: \"e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195042 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scwhf\" (UniqueName: \"kubernetes.io/projected/234c3156-bf4c-464d-8ee4-957474f3bb82-kube-api-access-scwhf\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195059 4917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-config\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195073 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195097 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bq5kv\" (UID: \"818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195113 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b06b8a36-c12a-4604-a017-277d9a6a18ff-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195129 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-serving-cert\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: 
\"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195163 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdsjz\" (UniqueName: \"kubernetes.io/projected/e8edfd21-19a8-499c-b67b-93883083c239-kube-api-access-qdsjz\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195177 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195214 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8b3bec2-0c51-46d8-9b79-22f802b58962-encryption-config\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195236 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-config\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195256 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-service-ca\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195276 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-image-import-ca\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195300 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95hdn\" (UniqueName: \"kubernetes.io/projected/c31d8f10-5195-45b7-9809-19edb34d404b-kube-api-access-95hdn\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195325 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6e70d859-42e8-4d15-85be-8456028abbc5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6m2j6\" (UID: \"6e70d859-42e8-4d15-85be-8456028abbc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195348 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b3bec2-0c51-46d8-9b79-22f802b58962-serving-cert\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " 
pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195373 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e8edfd21-19a8-499c-b67b-93883083c239-machine-approver-tls\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195395 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-client-ca\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195433 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-trusted-ca-bundle\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195470 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dr4f\" (UniqueName: \"kubernetes.io/projected/a8b3bec2-0c51-46d8-9b79-22f802b58962-kube-api-access-9dr4f\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195514 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e8edfd21-19a8-499c-b67b-93883083c239-auth-proxy-config\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195546 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqv95\" (UniqueName: \"kubernetes.io/projected/dbe29ce3-49c3-466c-8e10-02d57dba74fa-kube-api-access-qqv95\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195568 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8b3bec2-0c51-46d8-9b79-22f802b58962-audit-dir\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195606 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-service-ca-bundle\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195632 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8b3bec2-0c51-46d8-9b79-22f802b58962-etcd-client\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc 
kubenswrapper[4917]: I1212 00:07:57.195683 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcplc\" (UniqueName: \"kubernetes.io/projected/818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5-kube-api-access-mcplc\") pod \"cluster-samples-operator-665b6dd947-bq5kv\" (UID: \"818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195705 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvj5l\" (UniqueName: \"kubernetes.io/projected/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-kube-api-access-jvj5l\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195729 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b06b8a36-c12a-4604-a017-277d9a6a18ff-images\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195755 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c31d8f10-5195-45b7-9809-19edb34d404b-serving-cert\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195777 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/a8b3bec2-0c51-46d8-9b79-22f802b58962-node-pullsecrets\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195801 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-audit\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195826 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195854 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe29ce3-49c3-466c-8e10-02d57dba74fa-config\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195877 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-etcd-serving-ca\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195904 4917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-console-config\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195936 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvgw2\" (UniqueName: \"kubernetes.io/projected/e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2-kube-api-access-dvgw2\") pod \"dns-operator-744455d44c-w7hp2\" (UID: \"e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195955 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e70d859-42e8-4d15-85be-8456028abbc5-serving-cert\") pod \"openshift-config-operator-7777fb866f-6m2j6\" (UID: \"6e70d859-42e8-4d15-85be-8456028abbc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195976 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06b8a36-c12a-4604-a017-277d9a6a18ff-config\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.196013 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr65g\" (UniqueName: \"kubernetes.io/projected/b06b8a36-c12a-4604-a017-277d9a6a18ff-kube-api-access-dr65g\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: 
\"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.196038 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bcff\" (UniqueName: \"kubernetes.io/projected/6e70d859-42e8-4d15-85be-8456028abbc5-kube-api-access-9bcff\") pod \"openshift-config-operator-7777fb866f-6m2j6\" (UID: \"6e70d859-42e8-4d15-85be-8456028abbc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195634 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.194810 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.196749 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195264 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195460 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195742 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.196943 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195786 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195818 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.195861 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.196510 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.196617 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.197653 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.197851 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.199800 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.200295 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.202139 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: 
I1212 00:07:57.202341 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.202980 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.206133 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.207603 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.210321 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.210439 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.210439 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h6z88"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.215112 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8st8c"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.215748 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.215794 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t59j6"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.216439 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.217143 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.217526 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.222675 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nplkn"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.223475 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.226459 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.227124 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.232736 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.234338 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.235584 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.242984 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vtz2p"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.243883 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.245531 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.246164 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.246320 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.248057 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.248167 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.251552 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.252915 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.253851 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.255447 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-thf67"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.256278 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.266290 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.269386 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.275351 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.275981 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.278520 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.278733 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.282699 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.282747 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.283368 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.283929 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.284034 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.284136 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.285987 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.289270 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29424960-ttvzz"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.289400 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.290882 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8frb7"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.295573 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.297521 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8edfd21-19a8-499c-b67b-93883083c239-config\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.297698 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-config\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.297777 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbe29ce3-49c3-466c-8e10-02d57dba74fa-serving-cert\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.297853 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/234c3156-bf4c-464d-8ee4-957474f3bb82-console-serving-cert\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.297996 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe29aecd-7402-47c3-a15a-d5a489c48b29-serviceca\") pod \"image-pruner-29424960-ttvzz\" (UID: \"fe29aecd-7402-47c3-a15a-d5a489c48b29\") " pod="openshift-image-registry/image-pruner-29424960-ttvzz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298065 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2-metrics-tls\") pod \"dns-operator-744455d44c-w7hp2\" (UID: \"e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298131 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scwhf\" (UniqueName: \"kubernetes.io/projected/234c3156-bf4c-464d-8ee4-957474f3bb82-kube-api-access-scwhf\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298200 4917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-config\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298273 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298347 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-config\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298421 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298491 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63b111b7-6fe4-4dba-9c9e-eec6186f2ba2-config-volume\") pod \"dns-default-pzbrh\" (UID: \"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2\") " 
pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298556 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b029a79-9426-4acc-af09-c11c8216777c-serving-cert\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298656 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bq5kv\" (UID: \"818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298735 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b06b8a36-c12a-4604-a017-277d9a6a18ff-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298803 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-serving-cert\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298873 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/14f90559-e69f-445c-a61a-2b0f881abb72-etcd-client\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.298948 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.299024 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdsjz\" (UniqueName: \"kubernetes.io/projected/e8edfd21-19a8-499c-b67b-93883083c239-kube-api-access-qdsjz\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.299095 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.299514 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14f90559-e69f-445c-a61a-2b0f881abb72-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc 
kubenswrapper[4917]: I1212 00:07:57.298235 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-742lr"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.299710 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w7hp2"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.300040 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8848f4aa-3570-40a9-bd53-6e22cc7ed795-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qdfvr\" (UID: \"8848f4aa-3570-40a9-bd53-6e22cc7ed795\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.300077 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh4l7\" (UniqueName: \"kubernetes.io/projected/8848f4aa-3570-40a9-bd53-6e22cc7ed795-kube-api-access-sh4l7\") pod \"openshift-apiserver-operator-796bbdcf4f-qdfvr\" (UID: \"8848f4aa-3570-40a9-bd53-6e22cc7ed795\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.300101 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14f90559-e69f-445c-a61a-2b0f881abb72-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.300123 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.300151 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-config\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.300176 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-service-ca\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.300199 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-image-import-ca\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.300221 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8b3bec2-0c51-46d8-9b79-22f802b58962-encryption-config\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.301087 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.301122 4917 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-config\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.301223 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-config\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.301960 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.302355 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-image-import-ca\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.302517 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-service-ca\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.302576 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-95hdn\" (UniqueName: \"kubernetes.io/projected/c31d8f10-5195-45b7-9809-19edb34d404b-kube-api-access-95hdn\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.302805 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.303421 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-config\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.303730 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8edfd21-19a8-499c-b67b-93883083c239-config\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.304012 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5dp8w"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.305331 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/234c3156-bf4c-464d-8ee4-957474f3bb82-console-serving-cert\") pod 
\"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.305343 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2-metrics-tls\") pod \"dns-operator-744455d44c-w7hp2\" (UID: \"e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.305507 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lss4q"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.305562 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e8edfd21-19a8-499c-b67b-93883083c239-machine-approver-tls\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.305803 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6e70d859-42e8-4d15-85be-8456028abbc5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6m2j6\" (UID: \"6e70d859-42e8-4d15-85be-8456028abbc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.305904 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b3bec2-0c51-46d8-9b79-22f802b58962-serving-cert\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: 
I1212 00:07:57.306269 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6e70d859-42e8-4d15-85be-8456028abbc5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6m2j6\" (UID: \"6e70d859-42e8-4d15-85be-8456028abbc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.306052 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8b3bec2-0c51-46d8-9b79-22f802b58962-encryption-config\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.305990 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbe29ce3-49c3-466c-8e10-02d57dba74fa-serving-cert\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.306894 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bq5kv\" (UID: \"818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.307022 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c845551a-a148-4bde-9808-f2d8d09c7616-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: 
\"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.307148 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-client-ca\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.307251 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.307363 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.307472 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmqsv\" (UniqueName: \"kubernetes.io/projected/f3af21ad-4bfb-4640-9589-c46313fb2379-kube-api-access-lmqsv\") pod \"downloads-7954f5f757-jctrq\" (UID: \"f3af21ad-4bfb-4640-9589-c46313fb2379\") " pod="openshift-console/downloads-7954f5f757-jctrq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.307572 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-policies\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.307678 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.307789 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8edfd21-19a8-499c-b67b-93883083c239-auth-proxy-config\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.307973 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-trusted-ca-bundle\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308087 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dr4f\" (UniqueName: \"kubernetes.io/projected/a8b3bec2-0c51-46d8-9b79-22f802b58962-kube-api-access-9dr4f\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " 
pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308274 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fa6201-10ae-43a7-95d2-190b604c2594-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-97xnz\" (UID: \"13fa6201-10ae-43a7-95d2-190b604c2594\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308400 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8edfd21-19a8-499c-b67b-93883083c239-auth-proxy-config\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.307824 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b06b8a36-c12a-4604-a017-277d9a6a18ff-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308515 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqv95\" (UniqueName: \"kubernetes.io/projected/dbe29ce3-49c3-466c-8e10-02d57dba74fa-kube-api-access-qqv95\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308573 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/e8edfd21-19a8-499c-b67b-93883083c239-machine-approver-tls\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308588 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8b3bec2-0c51-46d8-9b79-22f802b58962-audit-dir\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308689 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-service-ca-bundle\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308723 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl8hh\" (UniqueName: \"kubernetes.io/projected/63b111b7-6fe4-4dba-9c9e-eec6186f2ba2-kube-api-access-pl8hh\") pod \"dns-default-pzbrh\" (UID: \"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2\") " pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308166 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-client-ca\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308751 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8b3bec2-0c51-46d8-9b79-22f802b58962-etcd-client\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308779 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdcvl\" (UniqueName: \"kubernetes.io/projected/ae4dff24-ae34-4029-a0a1-30e9a379f091-kube-api-access-jdcvl\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308804 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgpfl\" (UniqueName: \"kubernetes.io/projected/13fa6201-10ae-43a7-95d2-190b604c2594-kube-api-access-lgpfl\") pod \"openshift-controller-manager-operator-756b6f6bc6-97xnz\" (UID: \"13fa6201-10ae-43a7-95d2-190b604c2594\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308832 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-857sk\" (UniqueName: \"kubernetes.io/projected/fe29aecd-7402-47c3-a15a-d5a489c48b29-kube-api-access-857sk\") pod \"image-pruner-29424960-ttvzz\" (UID: \"fe29aecd-7402-47c3-a15a-d5a489c48b29\") " pod="openshift-image-registry/image-pruner-29424960-ttvzz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308853 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13fa6201-10ae-43a7-95d2-190b604c2594-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-97xnz\" (UID: \"13fa6201-10ae-43a7-95d2-190b604c2594\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308881 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcplc\" (UniqueName: \"kubernetes.io/projected/818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5-kube-api-access-mcplc\") pod \"cluster-samples-operator-665b6dd947-bq5kv\" (UID: \"818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308904 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvj5l\" (UniqueName: \"kubernetes.io/projected/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-kube-api-access-jvj5l\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308931 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f90559-e69f-445c-a61a-2b0f881abb72-serving-cert\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308958 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b06b8a36-c12a-4604-a017-277d9a6a18ff-images\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308980 4917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8848f4aa-3570-40a9-bd53-6e22cc7ed795-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qdfvr\" (UID: \"8848f4aa-3570-40a9-bd53-6e22cc7ed795\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309001 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rr6\" (UniqueName: \"kubernetes.io/projected/14f90559-e69f-445c-a61a-2b0f881abb72-kube-api-access-64rr6\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309026 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309054 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c31d8f10-5195-45b7-9809-19edb34d404b-serving-cert\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309074 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a8b3bec2-0c51-46d8-9b79-22f802b58962-node-pullsecrets\") pod \"apiserver-76f77b778f-lss4q\" (UID: 
\"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309095 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309119 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-audit\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309142 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/14f90559-e69f-445c-a61a-2b0f881abb72-encryption-config\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309171 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309178 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-service-ca-bundle\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309193 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63b111b7-6fe4-4dba-9c9e-eec6186f2ba2-metrics-tls\") pod \"dns-default-pzbrh\" (UID: \"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2\") " pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309217 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c845551a-a148-4bde-9808-f2d8d09c7616-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309253 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe29ce3-49c3-466c-8e10-02d57dba74fa-config\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309275 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-client-ca\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc 
kubenswrapper[4917]: I1212 00:07:57.309297 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-dir\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309324 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-etcd-serving-ca\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309345 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309411 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-console-config\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309433 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/14f90559-e69f-445c-a61a-2b0f881abb72-audit-policies\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: 
\"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309464 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvgw2\" (UniqueName: \"kubernetes.io/projected/e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2-kube-api-access-dvgw2\") pod \"dns-operator-744455d44c-w7hp2\" (UID: \"e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309487 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e70d859-42e8-4d15-85be-8456028abbc5-serving-cert\") pod \"openshift-config-operator-7777fb866f-6m2j6\" (UID: \"6e70d859-42e8-4d15-85be-8456028abbc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309510 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06b8a36-c12a-4604-a017-277d9a6a18ff-config\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309533 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bcff\" (UniqueName: \"kubernetes.io/projected/6e70d859-42e8-4d15-85be-8456028abbc5-kube-api-access-9bcff\") pod \"openshift-config-operator-7777fb866f-6m2j6\" (UID: \"6e70d859-42e8-4d15-85be-8456028abbc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309556 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr65g\" (UniqueName: 
\"kubernetes.io/projected/b06b8a36-c12a-4604-a017-277d9a6a18ff-kube-api-access-dr65g\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309578 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zpk9\" (UniqueName: \"kubernetes.io/projected/4b029a79-9426-4acc-af09-c11c8216777c-kube-api-access-5zpk9\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309615 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309657 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe29ce3-49c3-466c-8e10-02d57dba74fa-trusted-ca\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309678 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/234c3156-bf4c-464d-8ee4-957474f3bb82-console-oauth-config\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 
00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309701 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-oauth-serving-cert\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309725 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14f90559-e69f-445c-a61a-2b0f881abb72-audit-dir\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309747 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9s6\" (UniqueName: \"kubernetes.io/projected/c845551a-a148-4bde-9808-f2d8d09c7616-kube-api-access-zm9s6\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309772 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.309791 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/c845551a-a148-4bde-9808-f2d8d09c7616-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.308996 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-trusted-ca-bundle\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.310380 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a8b3bec2-0c51-46d8-9b79-22f802b58962-node-pullsecrets\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.310532 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b06b8a36-c12a-4604-a017-277d9a6a18ff-images\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.310759 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-console-config\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.311180 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-audit\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.311177 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/234c3156-bf4c-464d-8ee4-957474f3bb82-oauth-serving-cert\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.311602 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8b3bec2-0c51-46d8-9b79-22f802b58962-etcd-client\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.311738 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe29ce3-49c3-466c-8e10-02d57dba74fa-config\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.312074 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06b8a36-c12a-4604-a017-277d9a6a18ff-config\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.312285 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.312409 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-serving-cert\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.312531 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8b3bec2-0c51-46d8-9b79-22f802b58962-etcd-serving-ca\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.312631 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe29ce3-49c3-466c-8e10-02d57dba74fa-trusted-ca\") pod \"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.312963 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b3bec2-0c51-46d8-9b79-22f802b58962-serving-cert\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.312993 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.313119 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8b3bec2-0c51-46d8-9b79-22f802b58962-audit-dir\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.313670 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e70d859-42e8-4d15-85be-8456028abbc5-serving-cert\") pod \"openshift-config-operator-7777fb866f-6m2j6\" (UID: \"6e70d859-42e8-4d15-85be-8456028abbc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.314243 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/234c3156-bf4c-464d-8ee4-957474f3bb82-console-oauth-config\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.316230 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c31d8f10-5195-45b7-9809-19edb34d404b-serving-cert\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.316662 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.319605 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.320469 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.322205 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.323518 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.325528 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.328847 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t59j6"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.330726 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.331519 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8st8c"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.332982 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-th48g"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.334801 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fqm67"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.335999 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.336519 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rctq2"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.337256 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rctq2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.339826 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pjzsr"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.340395 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.340533 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.341698 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.343188 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jctrq"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.345864 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-84jtz"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.346077 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fhrmw"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.348047 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rctq2"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.349633 
4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dzpcq"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.351508 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pzbrh"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.353278 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.355730 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vtz2p"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.357399 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.359425 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.359786 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.361048 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.362060 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nplkn"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.363733 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.365040 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.367288 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.375589 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.379400 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.381705 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.382059 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fqm67"] Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.401027 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411289 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14f90559-e69f-445c-a61a-2b0f881abb72-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411330 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a29599f5-a122-4891-a597-051ac247d945-proxy-tls\") pod \"machine-config-controller-84d6567774-qsfsf\" (UID: 
\"a29599f5-a122-4891-a597-051ac247d945\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411357 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411379 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8848f4aa-3570-40a9-bd53-6e22cc7ed795-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qdfvr\" (UID: \"8848f4aa-3570-40a9-bd53-6e22cc7ed795\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411400 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh4l7\" (UniqueName: \"kubernetes.io/projected/8848f4aa-3570-40a9-bd53-6e22cc7ed795-kube-api-access-sh4l7\") pod \"openshift-apiserver-operator-796bbdcf4f-qdfvr\" (UID: \"8848f4aa-3570-40a9-bd53-6e22cc7ed795\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411418 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14f90559-e69f-445c-a61a-2b0f881abb72-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411436 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/38699438-1690-4770-9e6c-7f7033ed2fea-signing-key\") pod \"service-ca-9c57cc56f-vtz2p\" (UID: \"38699438-1690-4770-9e6c-7f7033ed2fea\") " pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411467 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a189923c-a229-479b-8afa-12d4fe112a83-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411486 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a29599f5-a122-4891-a597-051ac247d945-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qsfsf\" (UID: \"a29599f5-a122-4891-a597-051ac247d945\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411520 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9ba57102-f6a5-41ef-a83c-951795076ab5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tn6z\" (UID: \"9ba57102-f6a5-41ef-a83c-951795076ab5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411541 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c845551a-a148-4bde-9808-f2d8d09c7616-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411558 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/38699438-1690-4770-9e6c-7f7033ed2fea-signing-cabundle\") pod \"service-ca-9c57cc56f-vtz2p\" (UID: \"38699438-1690-4770-9e6c-7f7033ed2fea\") " pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411588 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411608 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411626 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmqsv\" (UniqueName: \"kubernetes.io/projected/f3af21ad-4bfb-4640-9589-c46313fb2379-kube-api-access-lmqsv\") pod \"downloads-7954f5f757-jctrq\" (UID: \"f3af21ad-4bfb-4640-9589-c46313fb2379\") " pod="openshift-console/downloads-7954f5f757-jctrq" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411659 4917 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-policies\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411676 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411692 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwfc2\" (UniqueName: \"kubernetes.io/projected/a29599f5-a122-4891-a597-051ac247d945-kube-api-access-jwfc2\") pod \"machine-config-controller-84d6567774-qsfsf\" (UID: \"a29599f5-a122-4891-a597-051ac247d945\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411710 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqnn8\" (UniqueName: \"kubernetes.io/projected/24e2c7bd-c682-49e7-942c-eb8afe865602-kube-api-access-sqnn8\") pod \"marketplace-operator-79b997595-8st8c\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411736 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fa6201-10ae-43a7-95d2-190b604c2594-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-97xnz\" (UID: \"13fa6201-10ae-43a7-95d2-190b604c2594\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411754 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777q9\" (UniqueName: \"kubernetes.io/projected/38699438-1690-4770-9e6c-7f7033ed2fea-kube-api-access-777q9\") pod \"service-ca-9c57cc56f-vtz2p\" (UID: \"38699438-1690-4770-9e6c-7f7033ed2fea\") " pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411773 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a189923c-a229-479b-8afa-12d4fe112a83-images\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411808 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl8hh\" (UniqueName: \"kubernetes.io/projected/63b111b7-6fe4-4dba-9c9e-eec6186f2ba2-kube-api-access-pl8hh\") pod \"dns-default-pzbrh\" (UID: \"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2\") " pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411831 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdcvl\" (UniqueName: \"kubernetes.io/projected/ae4dff24-ae34-4029-a0a1-30e9a379f091-kube-api-access-jdcvl\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411848 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lgpfl\" (UniqueName: \"kubernetes.io/projected/13fa6201-10ae-43a7-95d2-190b604c2594-kube-api-access-lgpfl\") pod \"openshift-controller-manager-operator-756b6f6bc6-97xnz\" (UID: \"13fa6201-10ae-43a7-95d2-190b604c2594\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411864 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c4ee0d0-175d-436c-9161-2822246aacec-secret-volume\") pod \"collect-profiles-29424960-nn5xr\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411882 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-trusted-ca\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411899 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-857sk\" (UniqueName: \"kubernetes.io/projected/fe29aecd-7402-47c3-a15a-d5a489c48b29-kube-api-access-857sk\") pod \"image-pruner-29424960-ttvzz\" (UID: \"fe29aecd-7402-47c3-a15a-d5a489c48b29\") " pod="openshift-image-registry/image-pruner-29424960-ttvzz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411917 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13fa6201-10ae-43a7-95d2-190b604c2594-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-97xnz\" (UID: 
\"13fa6201-10ae-43a7-95d2-190b604c2594\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411936 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-metrics-tls\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411966 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f90559-e69f-445c-a61a-2b0f881abb72-serving-cert\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.411985 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a189923c-a229-479b-8afa-12d4fe112a83-proxy-tls\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412029 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8848f4aa-3570-40a9-bd53-6e22cc7ed795-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qdfvr\" (UID: \"8848f4aa-3570-40a9-bd53-6e22cc7ed795\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412050 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-64rr6\" (UniqueName: \"kubernetes.io/projected/14f90559-e69f-445c-a61a-2b0f881abb72-kube-api-access-64rr6\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412070 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412098 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412121 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/14f90559-e69f-445c-a61a-2b0f881abb72-encryption-config\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412138 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4ee0d0-175d-436c-9161-2822246aacec-config-volume\") pod \"collect-profiles-29424960-nn5xr\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412160 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63b111b7-6fe4-4dba-9c9e-eec6186f2ba2-metrics-tls\") pod \"dns-default-pzbrh\" (UID: \"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2\") " pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412205 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c845551a-a148-4bde-9808-f2d8d09c7616-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412858 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-client-ca\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412933 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-dir\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412970 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbs52\" (UniqueName: 
\"kubernetes.io/projected/9ba57102-f6a5-41ef-a83c-951795076ab5-kube-api-access-sbs52\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tn6z\" (UID: \"9ba57102-f6a5-41ef-a83c-951795076ab5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.413004 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.412042 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14f90559-e69f-445c-a61a-2b0f881abb72-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.413139 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-dir\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.413817 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8848f4aa-3570-40a9-bd53-6e22cc7ed795-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qdfvr\" (UID: \"8848f4aa-3570-40a9-bd53-6e22cc7ed795\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.413824 4917 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/14f90559-e69f-445c-a61a-2b0f881abb72-audit-policies\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.414020 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fa6201-10ae-43a7-95d2-190b604c2594-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-97xnz\" (UID: \"13fa6201-10ae-43a7-95d2-190b604c2594\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.414392 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-client-ca\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.414453 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14f90559-e69f-445c-a61a-2b0f881abb72-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.414487 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zpk9\" (UniqueName: \"kubernetes.io/projected/4b029a79-9426-4acc-af09-c11c8216777c-kube-api-access-5zpk9\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.414527 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.414707 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/14f90559-e69f-445c-a61a-2b0f881abb72-audit-policies\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.414865 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14f90559-e69f-445c-a61a-2b0f881abb72-audit-dir\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.414877 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-policies\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.414899 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9s6\" (UniqueName: \"kubernetes.io/projected/c845551a-a148-4bde-9808-f2d8d09c7616-kube-api-access-zm9s6\") pod 
\"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.414949 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14f90559-e69f-445c-a61a-2b0f881abb72-audit-dir\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415049 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415079 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415093 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c845551a-a148-4bde-9808-f2d8d09c7616-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415187 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8st8c\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415295 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67vbc\" (UniqueName: \"kubernetes.io/projected/5c4ee0d0-175d-436c-9161-2822246aacec-kube-api-access-67vbc\") pod \"collect-profiles-29424960-nn5xr\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415334 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415407 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe29aecd-7402-47c3-a15a-d5a489c48b29-serviceca\") pod \"image-pruner-29424960-ttvzz\" (UID: \"fe29aecd-7402-47c3-a15a-d5a489c48b29\") " pod="openshift-image-registry/image-pruner-29424960-ttvzz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415448 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4ch\" (UniqueName: \"kubernetes.io/projected/a189923c-a229-479b-8afa-12d4fe112a83-kube-api-access-nb4ch\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: 
\"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415630 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415752 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-config\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415774 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415798 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4blmp\" (UniqueName: \"kubernetes.io/projected/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-kube-api-access-4blmp\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415828 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8st8c\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415854 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63b111b7-6fe4-4dba-9c9e-eec6186f2ba2-config-volume\") pod \"dns-default-pzbrh\" (UID: \"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2\") " pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.415972 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b029a79-9426-4acc-af09-c11c8216777c-serving-cert\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.416020 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/14f90559-e69f-445c-a61a-2b0f881abb72-etcd-client\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.416052 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.416458 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.417207 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-config\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.417712 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.417859 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.418148 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/63b111b7-6fe4-4dba-9c9e-eec6186f2ba2-config-volume\") pod \"dns-default-pzbrh\" (UID: \"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2\") " pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.418155 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/14f90559-e69f-445c-a61a-2b0f881abb72-encryption-config\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.418222 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.418436 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63b111b7-6fe4-4dba-9c9e-eec6186f2ba2-metrics-tls\") pod \"dns-default-pzbrh\" (UID: \"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2\") " pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.418367 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f90559-e69f-445c-a61a-2b0f881abb72-serving-cert\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.419398 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.419874 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b029a79-9426-4acc-af09-c11c8216777c-serving-cert\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.419906 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c845551a-a148-4bde-9808-f2d8d09c7616-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.420112 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.420559 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe29aecd-7402-47c3-a15a-d5a489c48b29-serviceca\") pod \"image-pruner-29424960-ttvzz\" (UID: \"fe29aecd-7402-47c3-a15a-d5a489c48b29\") " pod="openshift-image-registry/image-pruner-29424960-ttvzz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 
00:07:57.420692 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13fa6201-10ae-43a7-95d2-190b604c2594-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-97xnz\" (UID: \"13fa6201-10ae-43a7-95d2-190b604c2594\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.421142 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.421280 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/14f90559-e69f-445c-a61a-2b0f881abb72-etcd-client\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.421581 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.421609 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.421593 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.422817 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8848f4aa-3570-40a9-bd53-6e22cc7ed795-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qdfvr\" (UID: \"8848f4aa-3570-40a9-bd53-6e22cc7ed795\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.446062 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.453062 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c845551a-a148-4bde-9808-f2d8d09c7616-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.481707 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.500233 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.516857 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a189923c-a229-479b-8afa-12d4fe112a83-proxy-tls\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: 
\"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.516905 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4ee0d0-175d-436c-9161-2822246aacec-config-volume\") pod \"collect-profiles-29424960-nn5xr\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.516928 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbs52\" (UniqueName: \"kubernetes.io/projected/9ba57102-f6a5-41ef-a83c-951795076ab5-kube-api-access-sbs52\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tn6z\" (UID: \"9ba57102-f6a5-41ef-a83c-951795076ab5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517000 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8st8c\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517022 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67vbc\" (UniqueName: \"kubernetes.io/projected/5c4ee0d0-175d-436c-9161-2822246aacec-kube-api-access-67vbc\") pod \"collect-profiles-29424960-nn5xr\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517040 4917 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517060 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb4ch\" (UniqueName: \"kubernetes.io/projected/a189923c-a229-479b-8afa-12d4fe112a83-kube-api-access-nb4ch\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517093 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4blmp\" (UniqueName: \"kubernetes.io/projected/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-kube-api-access-4blmp\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517116 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8st8c\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517160 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a29599f5-a122-4891-a597-051ac247d945-proxy-tls\") pod \"machine-config-controller-84d6567774-qsfsf\" (UID: \"a29599f5-a122-4891-a597-051ac247d945\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517186 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/38699438-1690-4770-9e6c-7f7033ed2fea-signing-key\") pod \"service-ca-9c57cc56f-vtz2p\" (UID: \"38699438-1690-4770-9e6c-7f7033ed2fea\") " pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517208 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a189923c-a229-479b-8afa-12d4fe112a83-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517236 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a29599f5-a122-4891-a597-051ac247d945-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qsfsf\" (UID: \"a29599f5-a122-4891-a597-051ac247d945\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517255 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/38699438-1690-4770-9e6c-7f7033ed2fea-signing-cabundle\") pod \"service-ca-9c57cc56f-vtz2p\" (UID: \"38699438-1690-4770-9e6c-7f7033ed2fea\") " pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517272 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/9ba57102-f6a5-41ef-a83c-951795076ab5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tn6z\" (UID: \"9ba57102-f6a5-41ef-a83c-951795076ab5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517306 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwfc2\" (UniqueName: \"kubernetes.io/projected/a29599f5-a122-4891-a597-051ac247d945-kube-api-access-jwfc2\") pod \"machine-config-controller-84d6567774-qsfsf\" (UID: \"a29599f5-a122-4891-a597-051ac247d945\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517333 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-777q9\" (UniqueName: \"kubernetes.io/projected/38699438-1690-4770-9e6c-7f7033ed2fea-kube-api-access-777q9\") pod \"service-ca-9c57cc56f-vtz2p\" (UID: \"38699438-1690-4770-9e6c-7f7033ed2fea\") " pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517349 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqnn8\" (UniqueName: \"kubernetes.io/projected/24e2c7bd-c682-49e7-942c-eb8afe865602-kube-api-access-sqnn8\") pod \"marketplace-operator-79b997595-8st8c\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517366 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a189923c-a229-479b-8afa-12d4fe112a83-images\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517456 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c4ee0d0-175d-436c-9161-2822246aacec-secret-volume\") pod \"collect-profiles-29424960-nn5xr\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517476 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-trusted-ca\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.517562 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-metrics-tls\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.518257 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a189923c-a229-479b-8afa-12d4fe112a83-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.518629 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/a29599f5-a122-4891-a597-051ac247d945-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qsfsf\" (UID: \"a29599f5-a122-4891-a597-051ac247d945\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.520464 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.541064 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.560783 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.580464 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.601292 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.601326 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.601381 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.601315 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.621227 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.647916 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.649056 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-trusted-ca\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.660463 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.673492 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-metrics-tls\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.681842 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.701382 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.720501 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 12 00:07:57 crc 
kubenswrapper[4917]: I1212 00:07:57.742034 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.761317 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.779989 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.799994 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.821011 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.841835 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.865028 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.873184 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8st8c\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.880886 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: 
I1212 00:07:57.912292 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.920313 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8st8c\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.922602 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.941263 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.961799 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 12 00:07:57 crc kubenswrapper[4917]: I1212 00:07:57.982069 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.001954 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.021500 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.041580 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.051729 4917 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9ba57102-f6a5-41ef-a83c-951795076ab5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tn6z\" (UID: \"9ba57102-f6a5-41ef-a83c-951795076ab5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.060860 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.080856 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.100998 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.121103 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.141450 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.160595 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.181441 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.200424 4917 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.220410 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.240970 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.259759 4917 request.go:700] Waited for 1.015560802s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0 Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.260971 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.272201 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/38699438-1690-4770-9e6c-7f7033ed2fea-signing-key\") pod \"service-ca-9c57cc56f-vtz2p\" (UID: \"38699438-1690-4770-9e6c-7f7033ed2fea\") " pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.281366 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.303001 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.309917 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/38699438-1690-4770-9e6c-7f7033ed2fea-signing-cabundle\") pod 
\"service-ca-9c57cc56f-vtz2p\" (UID: \"38699438-1690-4770-9e6c-7f7033ed2fea\") " pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.321854 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.340942 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.361933 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.381615 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.392950 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c4ee0d0-175d-436c-9161-2822246aacec-secret-volume\") pod \"collect-profiles-29424960-nn5xr\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.401389 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.421942 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.440828 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.461407 4917 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.471756 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a29599f5-a122-4891-a597-051ac247d945-proxy-tls\") pod \"machine-config-controller-84d6567774-qsfsf\" (UID: \"a29599f5-a122-4891-a597-051ac247d945\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.482422 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.501831 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 12 00:07:58 crc kubenswrapper[4917]: E1212 00:07:58.517727 4917 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Dec 12 00:07:58 crc kubenswrapper[4917]: E1212 00:07:58.517816 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c4ee0d0-175d-436c-9161-2822246aacec-config-volume podName:5c4ee0d0-175d-436c-9161-2822246aacec nodeName:}" failed. No retries permitted until 2025-12-12 00:07:59.017785064 +0000 UTC m=+113.795585887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5c4ee0d0-175d-436c-9161-2822246aacec-config-volume") pod "collect-profiles-29424960-nn5xr" (UID: "5c4ee0d0-175d-436c-9161-2822246aacec") : failed to sync configmap cache: timed out waiting for the condition Dec 12 00:07:58 crc kubenswrapper[4917]: E1212 00:07:58.518081 4917 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Dec 12 00:07:58 crc kubenswrapper[4917]: E1212 00:07:58.518150 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a189923c-a229-479b-8afa-12d4fe112a83-proxy-tls podName:a189923c-a229-479b-8afa-12d4fe112a83 nodeName:}" failed. No retries permitted until 2025-12-12 00:07:59.018137802 +0000 UTC m=+113.795938625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/a189923c-a229-479b-8afa-12d4fe112a83-proxy-tls") pod "machine-config-operator-74547568cd-tkfhd" (UID: "a189923c-a229-479b-8afa-12d4fe112a83") : failed to sync secret cache: timed out waiting for the condition Dec 12 00:07:58 crc kubenswrapper[4917]: E1212 00:07:58.518230 4917 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Dec 12 00:07:58 crc kubenswrapper[4917]: E1212 00:07:58.518338 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a189923c-a229-479b-8afa-12d4fe112a83-images podName:a189923c-a229-479b-8afa-12d4fe112a83 nodeName:}" failed. No retries permitted until 2025-12-12 00:07:59.018322467 +0000 UTC m=+113.796123290 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/a189923c-a229-479b-8afa-12d4fe112a83-images") pod "machine-config-operator-74547568cd-tkfhd" (UID: "a189923c-a229-479b-8afa-12d4fe112a83") : failed to sync configmap cache: timed out waiting for the condition Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.521559 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.542295 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.562128 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.581906 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.601569 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.601763 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.621562 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.641390 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.660841 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.681125 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.701541 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.721242 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.741309 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.761746 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.782088 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.800481 4917 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.821613 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.841314 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.861773 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.882157 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.901584 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.920726 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.941461 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 12 00:07:58 crc kubenswrapper[4917]: I1212 00:07:58.982843 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scwhf\" (UniqueName: \"kubernetes.io/projected/234c3156-bf4c-464d-8ee4-957474f3bb82-kube-api-access-scwhf\") pod \"console-f9d7485db-8frb7\" (UID: \"234c3156-bf4c-464d-8ee4-957474f3bb82\") " pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.002297 4917 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdsjz\" (UniqueName: \"kubernetes.io/projected/e8edfd21-19a8-499c-b67b-93883083c239-kube-api-access-qdsjz\") pod \"machine-approver-56656f9798-t9t5t\" (UID: \"e8edfd21-19a8-499c-b67b-93883083c239\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.021182 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95hdn\" (UniqueName: \"kubernetes.io/projected/c31d8f10-5195-45b7-9809-19edb34d404b-kube-api-access-95hdn\") pod \"controller-manager-879f6c89f-h6z88\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.047980 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a189923c-a229-479b-8afa-12d4fe112a83-images\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.048166 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a189923c-a229-479b-8afa-12d4fe112a83-proxy-tls\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.048229 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4ee0d0-175d-436c-9161-2822246aacec-config-volume\") pod \"collect-profiles-29424960-nn5xr\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.049289 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a189923c-a229-479b-8afa-12d4fe112a83-images\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.049477 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4ee0d0-175d-436c-9161-2822246aacec-config-volume\") pod \"collect-profiles-29424960-nn5xr\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.052016 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a189923c-a229-479b-8afa-12d4fe112a83-proxy-tls\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.063015 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dr4f\" (UniqueName: \"kubernetes.io/projected/a8b3bec2-0c51-46d8-9b79-22f802b58962-kube-api-access-9dr4f\") pod \"apiserver-76f77b778f-lss4q\" (UID: \"a8b3bec2-0c51-46d8-9b79-22f802b58962\") " pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.080213 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqv95\" (UniqueName: \"kubernetes.io/projected/dbe29ce3-49c3-466c-8e10-02d57dba74fa-kube-api-access-qqv95\") pod 
\"console-operator-58897d9998-742lr\" (UID: \"dbe29ce3-49c3-466c-8e10-02d57dba74fa\") " pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.101625 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcplc\" (UniqueName: \"kubernetes.io/projected/818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5-kube-api-access-mcplc\") pod \"cluster-samples-operator-665b6dd947-bq5kv\" (UID: \"818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.122629 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvj5l\" (UniqueName: \"kubernetes.io/projected/09fd0f32-0d7e-48c7-a673-ca45f8db8e8e-kube-api-access-jvj5l\") pod \"authentication-operator-69f744f599-dzpcq\" (UID: \"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.138732 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvgw2\" (UniqueName: \"kubernetes.io/projected/e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2-kube-api-access-dvgw2\") pod \"dns-operator-744455d44c-w7hp2\" (UID: \"e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.151146 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.167119 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8frb7" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.167902 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bcff\" (UniqueName: \"kubernetes.io/projected/6e70d859-42e8-4d15-85be-8456028abbc5-kube-api-access-9bcff\") pod \"openshift-config-operator-7777fb866f-6m2j6\" (UID: \"6e70d859-42e8-4d15-85be-8456028abbc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.178243 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr65g\" (UniqueName: \"kubernetes.io/projected/b06b8a36-c12a-4604-a017-277d9a6a18ff-kube-api-access-dr65g\") pod \"machine-api-operator-5694c8668f-84jtz\" (UID: \"b06b8a36-c12a-4604-a017-277d9a6a18ff\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.181163 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.186825 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.201460 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.208982 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lss4q" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.213589 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" event={"ID":"e8edfd21-19a8-499c-b67b-93883083c239","Type":"ContainerStarted","Data":"a60fdd9e377ba9e04f53e37382e073db9f9fca2f140ddcc0c1d6ec02cc68d05e"} Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.221364 4917 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.241904 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.242141 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.260264 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.281238 4917 request.go:700] Waited for 1.942196696s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.283725 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.300507 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.333727 4917 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.333852 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.342281 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.347347 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.360560 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.389209 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.394593 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8frb7"] Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.403626 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.408204 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh4l7\" (UniqueName: \"kubernetes.io/projected/8848f4aa-3570-40a9-bd53-6e22cc7ed795-kube-api-access-sh4l7\") pod \"openshift-apiserver-operator-796bbdcf4f-qdfvr\" (UID: \"8848f4aa-3570-40a9-bd53-6e22cc7ed795\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.419584 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdcvl\" (UniqueName: \"kubernetes.io/projected/ae4dff24-ae34-4029-a0a1-30e9a379f091-kube-api-access-jdcvl\") pod \"oauth-openshift-558db77b4-5dp8w\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.428488 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-742lr"] Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.440611 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgpfl\" (UniqueName: \"kubernetes.io/projected/13fa6201-10ae-43a7-95d2-190b604c2594-kube-api-access-lgpfl\") pod \"openshift-controller-manager-operator-756b6f6bc6-97xnz\" (UID: \"13fa6201-10ae-43a7-95d2-190b604c2594\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.444296 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.461996 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64rr6\" (UniqueName: \"kubernetes.io/projected/14f90559-e69f-445c-a61a-2b0f881abb72-kube-api-access-64rr6\") pod \"apiserver-7bbb656c7d-6cn77\" (UID: \"14f90559-e69f-445c-a61a-2b0f881abb72\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.474348 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.479290 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl8hh\" (UniqueName: \"kubernetes.io/projected/63b111b7-6fe4-4dba-9c9e-eec6186f2ba2-kube-api-access-pl8hh\") pod \"dns-default-pzbrh\" (UID: \"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2\") " pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.494895 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pzbrh" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.498961 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zpk9\" (UniqueName: \"kubernetes.io/projected/4b029a79-9426-4acc-af09-c11c8216777c-kube-api-access-5zpk9\") pod \"route-controller-manager-6576b87f9c-t8zct\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.518133 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmqsv\" (UniqueName: \"kubernetes.io/projected/f3af21ad-4bfb-4640-9589-c46313fb2379-kube-api-access-lmqsv\") pod \"downloads-7954f5f757-jctrq\" (UID: \"f3af21ad-4bfb-4640-9589-c46313fb2379\") " pod="openshift-console/downloads-7954f5f757-jctrq" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.549996 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-857sk\" (UniqueName: \"kubernetes.io/projected/fe29aecd-7402-47c3-a15a-d5a489c48b29-kube-api-access-857sk\") pod \"image-pruner-29424960-ttvzz\" (UID: \"fe29aecd-7402-47c3-a15a-d5a489c48b29\") " pod="openshift-image-registry/image-pruner-29424960-ttvzz" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.556299 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.578700 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9s6\" (UniqueName: \"kubernetes.io/projected/c845551a-a148-4bde-9808-f2d8d09c7616-kube-api-access-zm9s6\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.583999 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.584432 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c845551a-a148-4bde-9808-f2d8d09c7616-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxz2\" (UID: \"c845551a-a148-4bde-9808-f2d8d09c7616\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.629144 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.639257 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbs52\" (UniqueName: \"kubernetes.io/projected/9ba57102-f6a5-41ef-a83c-951795076ab5-kube-api-access-sbs52\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tn6z\" (UID: \"9ba57102-f6a5-41ef-a83c-951795076ab5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.642714 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67vbc\" (UniqueName: \"kubernetes.io/projected/5c4ee0d0-175d-436c-9161-2822246aacec-kube-api-access-67vbc\") pod \"collect-profiles-29424960-nn5xr\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.653077 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.659452 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.677098 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29424960-ttvzz" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.683207 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.690868 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb4ch\" (UniqueName: \"kubernetes.io/projected/a189923c-a229-479b-8afa-12d4fe112a83-kube-api-access-nb4ch\") pod \"machine-config-operator-74547568cd-tkfhd\" (UID: \"a189923c-a229-479b-8afa-12d4fe112a83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.712212 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4blmp\" (UniqueName: \"kubernetes.io/projected/89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3-kube-api-access-4blmp\") pod \"ingress-operator-5b745b69d9-4fk57\" (UID: \"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.726442 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwfc2\" (UniqueName: \"kubernetes.io/projected/a29599f5-a122-4891-a597-051ac247d945-kube-api-access-jwfc2\") pod \"machine-config-controller-84d6567774-qsfsf\" (UID: \"a29599f5-a122-4891-a597-051ac247d945\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.735668 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-jctrq" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.738861 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lss4q"] Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.739130 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h6z88"] Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.749014 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-777q9\" (UniqueName: \"kubernetes.io/projected/38699438-1690-4770-9e6c-7f7033ed2fea-kube-api-access-777q9\") pod \"service-ca-9c57cc56f-vtz2p\" (UID: \"38699438-1690-4770-9e6c-7f7033ed2fea\") " pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.757718 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqnn8\" (UniqueName: \"kubernetes.io/projected/24e2c7bd-c682-49e7-942c-eb8afe865602-kube-api-access-sqnn8\") pod \"marketplace-operator-79b997595-8st8c\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.761344 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 12 00:07:59 crc kubenswrapper[4917]: W1212 00:07:59.775482 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8b3bec2_0c51_46d8_9b79_22f802b58962.slice/crio-10412b328a7398fba0bc02711974bdb262c2ce72747ab0c16ae44d2a87930ef7 WatchSource:0}: Error finding container 10412b328a7398fba0bc02711974bdb262c2ce72747ab0c16ae44d2a87930ef7: Status 404 returned error can't find the container with id 10412b328a7398fba0bc02711974bdb262c2ce72747ab0c16ae44d2a87930ef7 Dec 12 
00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.781526 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.801099 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.802687 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.824874 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.841538 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.854830 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.862559 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.868221 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.882307 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.908963 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.922090 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.953932 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6"] Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.969922 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.969725 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-metrics-certs\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970166 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-serving-cert\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970285 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvfg9\" (UniqueName: \"kubernetes.io/projected/773a5eef-2c13-4979-95c7-f2ad9edf9783-kube-api-access-cvfg9\") pod \"kube-storage-version-migrator-operator-b67b599dd-77mdt\" (UID: \"773a5eef-2c13-4979-95c7-f2ad9edf9783\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970320 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/773a5eef-2c13-4979-95c7-f2ad9edf9783-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-77mdt\" (UID: \"773a5eef-2c13-4979-95c7-f2ad9edf9783\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970342 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4v8w\" (UniqueName: \"kubernetes.io/projected/d69caa5d-484b-4415-8c05-61b9a2729dd7-kube-api-access-p4v8w\") pod \"olm-operator-6b444d44fb-9th2b\" (UID: \"d69caa5d-484b-4415-8c05-61b9a2729dd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970360 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82xhm\" (UniqueName: \"kubernetes.io/projected/6d6ecfb5-06aa-4f53-bc43-74d425b172db-kube-api-access-82xhm\") pod \"service-ca-operator-777779d784-t59j6\" (UID: \"6d6ecfb5-06aa-4f53-bc43-74d425b172db\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970383 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9f9b3d-479f-4203-be65-5c71e7ee86f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bslfb\" (UID: \"fd9f9b3d-479f-4203-be65-5c71e7ee86f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 00:07:59 crc 
kubenswrapper[4917]: I1212 00:07:59.970420 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be440828-8884-4cd3-b30e-4eba825caa3b-srv-cert\") pod \"catalog-operator-68c6474976-pv86t\" (UID: \"be440828-8884-4cd3-b30e-4eba825caa3b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970441 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd9f9b3d-479f-4203-be65-5c71e7ee86f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bslfb\" (UID: \"fd9f9b3d-479f-4203-be65-5c71e7ee86f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970466 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmm4\" (UniqueName: \"kubernetes.io/projected/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-kube-api-access-llmm4\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970500 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac4662d0-8501-4627-81b8-fdfffff90309-ca-trust-extracted\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970538 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fkft\" (UniqueName: 
\"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-kube-api-access-8fkft\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970573 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8caaf389-4a14-4f76-b767-5f70b6a2b14d-config\") pod \"kube-controller-manager-operator-78b949d7b-6286r\" (UID: \"8caaf389-4a14-4f76-b767-5f70b6a2b14d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970604 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-service-ca-bundle\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970702 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970763 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-stats-auth\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" 
Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970829 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd9f9b3d-479f-4203-be65-5c71e7ee86f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bslfb\" (UID: \"fd9f9b3d-479f-4203-be65-5c71e7ee86f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970877 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22813008-d948-4643-a6d2-688ce469bada-webhook-cert\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970900 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8caaf389-4a14-4f76-b767-5f70b6a2b14d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6286r\" (UID: \"8caaf389-4a14-4f76-b767-5f70b6a2b14d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970948 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jgbs\" (UniqueName: \"kubernetes.io/projected/155eee0b-00b8-4e1e-9654-e811f70e9bdf-kube-api-access-8jgbs\") pod \"migrator-59844c95c7-b88pp\" (UID: \"155eee0b-00b8-4e1e-9654-e811f70e9bdf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.970990 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c70e2d16-2055-43e5-8d36-27d97a4c013e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ccxrr\" (UID: \"c70e2d16-2055-43e5-8d36-27d97a4c013e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:07:59 crc kubenswrapper[4917]: E1212 00:07:59.971016 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:00.47100237 +0000 UTC m=+115.248803183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.971039 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/22813008-d948-4643-a6d2-688ce469bada-tmpfs\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.971057 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22813008-d948-4643-a6d2-688ce469bada-apiservice-cert\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:07:59 crc kubenswrapper[4917]: 
I1212 00:07:59.971073 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51ad4779-871d-4324-af65-41e72c202e37-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b9wvw\" (UID: \"51ad4779-871d-4324-af65-41e72c202e37\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.971090 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7h4l\" (UniqueName: \"kubernetes.io/projected/51ad4779-871d-4324-af65-41e72c202e37-kube-api-access-z7h4l\") pod \"package-server-manager-789f6589d5-b9wvw\" (UID: \"51ad4779-871d-4324-af65-41e72c202e37\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.971304 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6ecfb5-06aa-4f53-bc43-74d425b172db-serving-cert\") pod \"service-ca-operator-777779d784-t59j6\" (UID: \"6d6ecfb5-06aa-4f53-bc43-74d425b172db\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.971351 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm4w2\" (UniqueName: \"kubernetes.io/projected/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-kube-api-access-mm4w2\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.973485 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-registry-certificates\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.973557 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773a5eef-2c13-4979-95c7-f2ad9edf9783-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-77mdt\" (UID: \"773a5eef-2c13-4979-95c7-f2ad9edf9783\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.973882 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9vhr\" (UniqueName: \"kubernetes.io/projected/9e3540e9-d5bc-43cb-bc1b-28787574cd08-kube-api-access-l9vhr\") pod \"multus-admission-controller-857f4d67dd-nplkn\" (UID: \"9e3540e9-d5bc-43cb-bc1b-28787574cd08\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.974379 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac4662d0-8501-4627-81b8-fdfffff90309-installation-pull-secrets\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.974417 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-trusted-ca\") pod \"image-registry-697d97f7c8-th48g\" (UID: 
\"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.974461 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-default-certificate\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.975473 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8caaf389-4a14-4f76-b767-5f70b6a2b14d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6286r\" (UID: \"8caaf389-4a14-4f76-b767-5f70b6a2b14d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.975555 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-etcd-ca\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.975678 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c70e2d16-2055-43e5-8d36-27d97a4c013e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ccxrr\" (UID: \"c70e2d16-2055-43e5-8d36-27d97a4c013e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.975773 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70e2d16-2055-43e5-8d36-27d97a4c013e-config\") pod \"kube-apiserver-operator-766d6c64bb-ccxrr\" (UID: \"c70e2d16-2055-43e5-8d36-27d97a4c013e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.975928 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d69caa5d-484b-4415-8c05-61b9a2729dd7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9th2b\" (UID: \"d69caa5d-484b-4415-8c05-61b9a2729dd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976047 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-bound-sa-token\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976253 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-registry-tls\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976275 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-etcd-client\") pod \"etcd-operator-b45778765-fhrmw\" (UID: 
\"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976310 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-etcd-service-ca\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976347 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d69caa5d-484b-4415-8c05-61b9a2729dd7-srv-cert\") pod \"olm-operator-6b444d44fb-9th2b\" (UID: \"d69caa5d-484b-4415-8c05-61b9a2729dd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976368 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/be440828-8884-4cd3-b30e-4eba825caa3b-profile-collector-cert\") pod \"catalog-operator-68c6474976-pv86t\" (UID: \"be440828-8884-4cd3-b30e-4eba825caa3b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976433 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-config\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976451 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-774zt\" (UniqueName: \"kubernetes.io/projected/be440828-8884-4cd3-b30e-4eba825caa3b-kube-api-access-774zt\") pod \"catalog-operator-68c6474976-pv86t\" (UID: \"be440828-8884-4cd3-b30e-4eba825caa3b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976468 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6ecfb5-06aa-4f53-bc43-74d425b172db-config\") pod \"service-ca-operator-777779d784-t59j6\" (UID: \"6d6ecfb5-06aa-4f53-bc43-74d425b172db\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976491 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shrgz\" (UniqueName: \"kubernetes.io/projected/22813008-d948-4643-a6d2-688ce469bada-kube-api-access-shrgz\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:07:59 crc kubenswrapper[4917]: I1212 00:07:59.976536 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9e3540e9-d5bc-43cb-bc1b-28787574cd08-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nplkn\" (UID: \"9e3540e9-d5bc-43cb-bc1b-28787574cd08\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077060 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077261 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd9f9b3d-479f-4203-be65-5c71e7ee86f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bslfb\" (UID: \"fd9f9b3d-479f-4203-be65-5c71e7ee86f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077297 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llmm4\" (UniqueName: \"kubernetes.io/projected/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-kube-api-access-llmm4\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077316 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac4662d0-8501-4627-81b8-fdfffff90309-ca-trust-extracted\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077344 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fkft\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-kube-api-access-8fkft\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077370 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8caaf389-4a14-4f76-b767-5f70b6a2b14d-config\") pod 
\"kube-controller-manager-operator-78b949d7b-6286r\" (UID: \"8caaf389-4a14-4f76-b767-5f70b6a2b14d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077397 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-csi-data-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077431 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-service-ca-bundle\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077482 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-stats-auth\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077509 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd9f9b3d-479f-4203-be65-5c71e7ee86f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bslfb\" (UID: \"fd9f9b3d-479f-4203-be65-5c71e7ee86f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077570 4917 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22813008-d948-4643-a6d2-688ce469bada-webhook-cert\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077598 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8caaf389-4a14-4f76-b767-5f70b6a2b14d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6286r\" (UID: \"8caaf389-4a14-4f76-b767-5f70b6a2b14d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077659 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jgbs\" (UniqueName: \"kubernetes.io/projected/155eee0b-00b8-4e1e-9654-e811f70e9bdf-kube-api-access-8jgbs\") pod \"migrator-59844c95c7-b88pp\" (UID: \"155eee0b-00b8-4e1e-9654-e811f70e9bdf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077676 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/16e61a0d-853c-4b17-b3c8-a12bed76808a-certs\") pod \"machine-config-server-pjzsr\" (UID: \"16e61a0d-853c-4b17-b3c8-a12bed76808a\") " pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077693 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51ad4779-871d-4324-af65-41e72c202e37-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b9wvw\" (UID: \"51ad4779-871d-4324-af65-41e72c202e37\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077711 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7h4l\" (UniqueName: \"kubernetes.io/projected/51ad4779-871d-4324-af65-41e72c202e37-kube-api-access-z7h4l\") pod \"package-server-manager-789f6589d5-b9wvw\" (UID: \"51ad4779-871d-4324-af65-41e72c202e37\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077728 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c70e2d16-2055-43e5-8d36-27d97a4c013e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ccxrr\" (UID: \"c70e2d16-2055-43e5-8d36-27d97a4c013e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077744 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/22813008-d948-4643-a6d2-688ce469bada-tmpfs\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077759 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22813008-d948-4643-a6d2-688ce469bada-apiservice-cert\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077776 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-mountpoint-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077803 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6ecfb5-06aa-4f53-bc43-74d425b172db-serving-cert\") pod \"service-ca-operator-777779d784-t59j6\" (UID: \"6d6ecfb5-06aa-4f53-bc43-74d425b172db\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077821 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm4w2\" (UniqueName: \"kubernetes.io/projected/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-kube-api-access-mm4w2\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077852 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-registry-certificates\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077878 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9vhr\" (UniqueName: \"kubernetes.io/projected/9e3540e9-d5bc-43cb-bc1b-28787574cd08-kube-api-access-l9vhr\") pod \"multus-admission-controller-857f4d67dd-nplkn\" (UID: \"9e3540e9-d5bc-43cb-bc1b-28787574cd08\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077895 4917 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773a5eef-2c13-4979-95c7-f2ad9edf9783-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-77mdt\" (UID: \"773a5eef-2c13-4979-95c7-f2ad9edf9783\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077918 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-plugins-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.077986 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac4662d0-8501-4627-81b8-fdfffff90309-installation-pull-secrets\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078013 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-trusted-ca\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078030 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-default-certificate\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " 
pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078056 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8e32995-fe25-4df9-8f90-8ce9b3086686-cert\") pod \"ingress-canary-rctq2\" (UID: \"d8e32995-fe25-4df9-8f90-8ce9b3086686\") " pod="openshift-ingress-canary/ingress-canary-rctq2" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078081 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/16e61a0d-853c-4b17-b3c8-a12bed76808a-node-bootstrap-token\") pod \"machine-config-server-pjzsr\" (UID: \"16e61a0d-853c-4b17-b3c8-a12bed76808a\") " pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078100 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8caaf389-4a14-4f76-b767-5f70b6a2b14d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6286r\" (UID: \"8caaf389-4a14-4f76-b767-5f70b6a2b14d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078126 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-etcd-ca\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078142 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7657\" (UniqueName: 
\"kubernetes.io/projected/6e0d83f2-3607-4608-b778-92e9cc6eb572-kube-api-access-d7657\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078169 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c70e2d16-2055-43e5-8d36-27d97a4c013e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ccxrr\" (UID: \"c70e2d16-2055-43e5-8d36-27d97a4c013e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078194 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70e2d16-2055-43e5-8d36-27d97a4c013e-config\") pod \"kube-apiserver-operator-766d6c64bb-ccxrr\" (UID: \"c70e2d16-2055-43e5-8d36-27d97a4c013e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078248 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d69caa5d-484b-4415-8c05-61b9a2729dd7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9th2b\" (UID: \"d69caa5d-484b-4415-8c05-61b9a2729dd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078275 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-bound-sa-token\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078310 4917 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-registry-tls\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078351 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-etcd-client\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078387 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-etcd-service-ca\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078423 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d69caa5d-484b-4415-8c05-61b9a2729dd7-srv-cert\") pod \"olm-operator-6b444d44fb-9th2b\" (UID: \"d69caa5d-484b-4415-8c05-61b9a2729dd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078449 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/be440828-8884-4cd3-b30e-4eba825caa3b-profile-collector-cert\") pod \"catalog-operator-68c6474976-pv86t\" (UID: \"be440828-8884-4cd3-b30e-4eba825caa3b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:08:00 
crc kubenswrapper[4917]: I1212 00:08:00.078474 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-registration-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078495 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6ecfb5-06aa-4f53-bc43-74d425b172db-config\") pod \"service-ca-operator-777779d784-t59j6\" (UID: \"6d6ecfb5-06aa-4f53-bc43-74d425b172db\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078512 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shrgz\" (UniqueName: \"kubernetes.io/projected/22813008-d948-4643-a6d2-688ce469bada-kube-api-access-shrgz\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078528 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-config\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078544 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-774zt\" (UniqueName: \"kubernetes.io/projected/be440828-8884-4cd3-b30e-4eba825caa3b-kube-api-access-774zt\") pod \"catalog-operator-68c6474976-pv86t\" (UID: \"be440828-8884-4cd3-b30e-4eba825caa3b\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078562 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t96d\" (UniqueName: \"kubernetes.io/projected/16e61a0d-853c-4b17-b3c8-a12bed76808a-kube-api-access-9t96d\") pod \"machine-config-server-pjzsr\" (UID: \"16e61a0d-853c-4b17-b3c8-a12bed76808a\") " pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078577 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9e3540e9-d5bc-43cb-bc1b-28787574cd08-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nplkn\" (UID: \"9e3540e9-d5bc-43cb-bc1b-28787574cd08\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078638 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-socket-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078673 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-serving-cert\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078699 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-metrics-certs\") 
pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078717 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvfg9\" (UniqueName: \"kubernetes.io/projected/773a5eef-2c13-4979-95c7-f2ad9edf9783-kube-api-access-cvfg9\") pod \"kube-storage-version-migrator-operator-b67b599dd-77mdt\" (UID: \"773a5eef-2c13-4979-95c7-f2ad9edf9783\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078734 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/773a5eef-2c13-4979-95c7-f2ad9edf9783-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-77mdt\" (UID: \"773a5eef-2c13-4979-95c7-f2ad9edf9783\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078770 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4v8w\" (UniqueName: \"kubernetes.io/projected/d69caa5d-484b-4415-8c05-61b9a2729dd7-kube-api-access-p4v8w\") pod \"olm-operator-6b444d44fb-9th2b\" (UID: \"d69caa5d-484b-4415-8c05-61b9a2729dd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078787 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9f9b3d-479f-4203-be65-5c71e7ee86f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bslfb\" (UID: \"fd9f9b3d-479f-4203-be65-5c71e7ee86f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 
00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078803 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82xhm\" (UniqueName: \"kubernetes.io/projected/6d6ecfb5-06aa-4f53-bc43-74d425b172db-kube-api-access-82xhm\") pod \"service-ca-operator-777779d784-t59j6\" (UID: \"6d6ecfb5-06aa-4f53-bc43-74d425b172db\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078828 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx9ff\" (UniqueName: \"kubernetes.io/projected/d8e32995-fe25-4df9-8f90-8ce9b3086686-kube-api-access-hx9ff\") pod \"ingress-canary-rctq2\" (UID: \"d8e32995-fe25-4df9-8f90-8ce9b3086686\") " pod="openshift-ingress-canary/ingress-canary-rctq2" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.078847 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be440828-8884-4cd3-b30e-4eba825caa3b-srv-cert\") pod \"catalog-operator-68c6474976-pv86t\" (UID: \"be440828-8884-4cd3-b30e-4eba825caa3b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:08:00 crc kubenswrapper[4917]: E1212 00:08:00.079184 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:00.579153939 +0000 UTC m=+115.356954752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.089412 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac4662d0-8501-4627-81b8-fdfffff90309-ca-trust-extracted\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.090561 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8caaf389-4a14-4f76-b767-5f70b6a2b14d-config\") pod \"kube-controller-manager-operator-78b949d7b-6286r\" (UID: \"8caaf389-4a14-4f76-b767-5f70b6a2b14d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.099694 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d69caa5d-484b-4415-8c05-61b9a2729dd7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9th2b\" (UID: \"d69caa5d-484b-4415-8c05-61b9a2729dd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.100250 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9e3540e9-d5bc-43cb-bc1b-28787574cd08-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-nplkn\" (UID: \"9e3540e9-d5bc-43cb-bc1b-28787574cd08\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.100883 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-service-ca-bundle\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.105982 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9f9b3d-479f-4203-be65-5c71e7ee86f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bslfb\" (UID: \"fd9f9b3d-479f-4203-be65-5c71e7ee86f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.106170 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be440828-8884-4cd3-b30e-4eba825caa3b-srv-cert\") pod \"catalog-operator-68c6474976-pv86t\" (UID: \"be440828-8884-4cd3-b30e-4eba825caa3b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.110168 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/773a5eef-2c13-4979-95c7-f2ad9edf9783-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-77mdt\" (UID: \"773a5eef-2c13-4979-95c7-f2ad9edf9783\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.110843 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c70e2d16-2055-43e5-8d36-27d97a4c013e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ccxrr\" (UID: \"c70e2d16-2055-43e5-8d36-27d97a4c013e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.110910 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22813008-d948-4643-a6d2-688ce469bada-webhook-cert\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.121737 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac4662d0-8501-4627-81b8-fdfffff90309-installation-pull-secrets\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.122773 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-config\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.123083 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773a5eef-2c13-4979-95c7-f2ad9edf9783-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-77mdt\" (UID: \"773a5eef-2c13-4979-95c7-f2ad9edf9783\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:08:00 crc kubenswrapper[4917]: 
I1212 00:08:00.127630 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd9f9b3d-479f-4203-be65-5c71e7ee86f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bslfb\" (UID: \"fd9f9b3d-479f-4203-be65-5c71e7ee86f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.129553 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70e2d16-2055-43e5-8d36-27d97a4c013e-config\") pod \"kube-apiserver-operator-766d6c64bb-ccxrr\" (UID: \"c70e2d16-2055-43e5-8d36-27d97a4c013e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.135356 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/22813008-d948-4643-a6d2-688ce469bada-tmpfs\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.135744 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6ecfb5-06aa-4f53-bc43-74d425b172db-config\") pod \"service-ca-operator-777779d784-t59j6\" (UID: \"6d6ecfb5-06aa-4f53-bc43-74d425b172db\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.141683 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-registry-tls\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.144268 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-etcd-ca\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.166534 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-stats-auth\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.167798 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22813008-d948-4643-a6d2-688ce469bada-apiservice-cert\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.168473 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-trusted-ca\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.170803 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d69caa5d-484b-4415-8c05-61b9a2729dd7-srv-cert\") pod \"olm-operator-6b444d44fb-9th2b\" (UID: \"d69caa5d-484b-4415-8c05-61b9a2729dd7\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.171337 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w7hp2"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.173104 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8caaf389-4a14-4f76-b767-5f70b6a2b14d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6286r\" (UID: \"8caaf389-4a14-4f76-b767-5f70b6a2b14d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.173463 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51ad4779-871d-4324-af65-41e72c202e37-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b9wvw\" (UID: \"51ad4779-871d-4324-af65-41e72c202e37\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.173622 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-metrics-certs\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.173753 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-default-certificate\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: 
I1212 00:08:00.174523 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-etcd-service-ca\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.174551 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-serving-cert\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.174948 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-registry-certificates\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.175488 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6ecfb5-06aa-4f53-bc43-74d425b172db-serving-cert\") pod \"service-ca-operator-777779d784-t59j6\" (UID: \"6d6ecfb5-06aa-4f53-bc43-74d425b172db\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.175632 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-etcd-client\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc 
kubenswrapper[4917]: I1212 00:08:00.181926 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-registration-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.181989 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t96d\" (UniqueName: \"kubernetes.io/projected/16e61a0d-853c-4b17-b3c8-a12bed76808a-kube-api-access-9t96d\") pod \"machine-config-server-pjzsr\" (UID: \"16e61a0d-853c-4b17-b3c8-a12bed76808a\") " pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.182012 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-socket-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.182072 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx9ff\" (UniqueName: \"kubernetes.io/projected/d8e32995-fe25-4df9-8f90-8ce9b3086686-kube-api-access-hx9ff\") pod \"ingress-canary-rctq2\" (UID: \"d8e32995-fe25-4df9-8f90-8ce9b3086686\") " pod="openshift-ingress-canary/ingress-canary-rctq2" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.182116 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-csi-data-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc 
kubenswrapper[4917]: I1212 00:08:00.182141 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.182190 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/16e61a0d-853c-4b17-b3c8-a12bed76808a-certs\") pod \"machine-config-server-pjzsr\" (UID: \"16e61a0d-853c-4b17-b3c8-a12bed76808a\") " pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.182217 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-mountpoint-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.182265 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-plugins-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.182297 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8e32995-fe25-4df9-8f90-8ce9b3086686-cert\") pod \"ingress-canary-rctq2\" (UID: \"d8e32995-fe25-4df9-8f90-8ce9b3086686\") " pod="openshift-ingress-canary/ingress-canary-rctq2" Dec 12 00:08:00 crc 
kubenswrapper[4917]: I1212 00:08:00.182323 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/16e61a0d-853c-4b17-b3c8-a12bed76808a-node-bootstrap-token\") pod \"machine-config-server-pjzsr\" (UID: \"16e61a0d-853c-4b17-b3c8-a12bed76808a\") " pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.182347 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7657\" (UniqueName: \"kubernetes.io/projected/6e0d83f2-3607-4608-b778-92e9cc6eb572-kube-api-access-d7657\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.182839 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-registration-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.183031 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-socket-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.183145 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-csi-data-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 
00:08:00.183408 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-plugins-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: E1212 00:08:00.183448 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:00.683430513 +0000 UTC m=+115.461231326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.183950 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llmm4\" (UniqueName: \"kubernetes.io/projected/1806fefb-c93c-4c0d-a7c8-dde659a77fbd-kube-api-access-llmm4\") pod \"etcd-operator-b45778765-fhrmw\" (UID: \"1806fefb-c93c-4c0d-a7c8-dde659a77fbd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.184098 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e0d83f2-3607-4608-b778-92e9cc6eb572-mountpoint-dir\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 
00:08:00.184286 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.184428 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dzpcq"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.192246 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7h4l\" (UniqueName: \"kubernetes.io/projected/51ad4779-871d-4324-af65-41e72c202e37-kube-api-access-z7h4l\") pod \"package-server-manager-789f6589d5-b9wvw\" (UID: \"51ad4779-871d-4324-af65-41e72c202e37\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.192368 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/be440828-8884-4cd3-b30e-4eba825caa3b-profile-collector-cert\") pod \"catalog-operator-68c6474976-pv86t\" (UID: \"be440828-8884-4cd3-b30e-4eba825caa3b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.195297 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.212464 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvfg9\" (UniqueName: \"kubernetes.io/projected/773a5eef-2c13-4979-95c7-f2ad9edf9783-kube-api-access-cvfg9\") pod \"kube-storage-version-migrator-operator-b67b599dd-77mdt\" (UID: \"773a5eef-2c13-4979-95c7-f2ad9edf9783\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.213200 
4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8e32995-fe25-4df9-8f90-8ce9b3086686-cert\") pod \"ingress-canary-rctq2\" (UID: \"d8e32995-fe25-4df9-8f90-8ce9b3086686\") " pod="openshift-ingress-canary/ingress-canary-rctq2" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.213345 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/16e61a0d-853c-4b17-b3c8-a12bed76808a-node-bootstrap-token\") pod \"machine-config-server-pjzsr\" (UID: \"16e61a0d-853c-4b17-b3c8-a12bed76808a\") " pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.213609 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/16e61a0d-853c-4b17-b3c8-a12bed76808a-certs\") pod \"machine-config-server-pjzsr\" (UID: \"16e61a0d-853c-4b17-b3c8-a12bed76808a\") " pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.213904 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fkft\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-kube-api-access-8fkft\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.220606 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4v8w\" (UniqueName: \"kubernetes.io/projected/d69caa5d-484b-4415-8c05-61b9a2729dd7-kube-api-access-p4v8w\") pod \"olm-operator-6b444d44fb-9th2b\" (UID: \"d69caa5d-484b-4415-8c05-61b9a2729dd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.231279 4917 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82xhm\" (UniqueName: \"kubernetes.io/projected/6d6ecfb5-06aa-4f53-bc43-74d425b172db-kube-api-access-82xhm\") pod \"service-ca-operator-777779d784-t59j6\" (UID: \"6d6ecfb5-06aa-4f53-bc43-74d425b172db\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.235590 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" event={"ID":"6e70d859-42e8-4d15-85be-8456028abbc5","Type":"ContainerStarted","Data":"4fa18978bece5f60ea0f05daba0a9bb78db30121ae66de897b9d3da103f01e6d"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.238807 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8caaf389-4a14-4f76-b767-5f70b6a2b14d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6286r\" (UID: \"8caaf389-4a14-4f76-b767-5f70b6a2b14d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.241504 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lss4q" event={"ID":"a8b3bec2-0c51-46d8-9b79-22f802b58962","Type":"ContainerStarted","Data":"10412b328a7398fba0bc02711974bdb262c2ce72747ab0c16ae44d2a87930ef7"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.242276 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.247525 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8frb7" event={"ID":"234c3156-bf4c-464d-8ee4-957474f3bb82","Type":"ContainerStarted","Data":"558876283cba1237d93c82a5d7ead6b32adf128105223347ce2894cdc0135b12"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.247602 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8frb7" event={"ID":"234c3156-bf4c-464d-8ee4-957474f3bb82","Type":"ContainerStarted","Data":"541caf057ee1af69d23982cec889b77f12afd5269dd14e3d712124cdcf9bc99c"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.265767 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.274496 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jgbs\" (UniqueName: \"kubernetes.io/projected/155eee0b-00b8-4e1e-9654-e811f70e9bdf-kube-api-access-8jgbs\") pod \"migrator-59844c95c7-b88pp\" (UID: \"155eee0b-00b8-4e1e-9654-e811f70e9bdf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.286614 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.287410 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" event={"ID":"e8edfd21-19a8-499c-b67b-93883083c239","Type":"ContainerStarted","Data":"04f9dfedf138790753700285970606125719bce630c96bfe4938c46230182d04"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.287480 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" event={"ID":"e8edfd21-19a8-499c-b67b-93883083c239","Type":"ContainerStarted","Data":"8cef71e1ae8a036f73b38cd16c343643101c87d436f007cf8d5e885452d013db"} Dec 12 00:08:00 crc kubenswrapper[4917]: E1212 00:08:00.291781 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:00.791741408 +0000 UTC m=+115.569542321 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.299134 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-bound-sa-token\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.299583 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pzbrh"] Dec 12 00:08:00 crc 
kubenswrapper[4917]: I1212 00:08:00.303699 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" event={"ID":"e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2","Type":"ContainerStarted","Data":"9cbb54cb6fd970905f0a623b69c73a20e290da4fcd001d802c151cd8f6060d24"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.313228 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" event={"ID":"c31d8f10-5195-45b7-9809-19edb34d404b","Type":"ContainerStarted","Data":"9b733cc20fa73f535259a24095a8e301e57bc0c118d0d0ec8ba9201ae1ff0152"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.313288 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" event={"ID":"c31d8f10-5195-45b7-9809-19edb34d404b","Type":"ContainerStarted","Data":"74af3995641cf5abda3383f7d209eb79e0c5367fd7b9c224ec915a84582d1d47"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.314097 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.321454 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-742lr" event={"ID":"dbe29ce3-49c3-466c-8e10-02d57dba74fa","Type":"ContainerStarted","Data":"e2e3fe49ccaf4cfec08fa9479c37e93b986242e2ac7d80ce33bed88deb462299"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.321504 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-742lr" event={"ID":"dbe29ce3-49c3-466c-8e10-02d57dba74fa","Type":"ContainerStarted","Data":"9d5f032482c6be365e1a7459a82616ff5fb38015f7234effc4a6ad397a265bd3"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.322219 4917 patch_prober.go:28] interesting 
pod/controller-manager-879f6c89f-h6z88 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.322302 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" podUID="c31d8f10-5195-45b7-9809-19edb34d404b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.323526 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.333968 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shrgz\" (UniqueName: \"kubernetes.io/projected/22813008-d948-4643-a6d2-688ce469bada-kube-api-access-shrgz\") pod \"packageserver-d55dfcdfc-c84zc\" (UID: \"22813008-d948-4643-a6d2-688ce469bada\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.335326 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" event={"ID":"13fa6201-10ae-43a7-95d2-190b604c2594","Type":"ContainerStarted","Data":"d513e858af303bc6e90e9fd757ce1b3d1cf5b6990b13ab90e79d4581fbc7e3e2"} Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.336529 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9vhr\" (UniqueName: \"kubernetes.io/projected/9e3540e9-d5bc-43cb-bc1b-28787574cd08-kube-api-access-l9vhr\") pod \"multus-admission-controller-857f4d67dd-nplkn\" (UID: 
\"9e3540e9-d5bc-43cb-bc1b-28787574cd08\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.343997 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.348214 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd9f9b3d-479f-4203-be65-5c71e7ee86f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bslfb\" (UID: \"fd9f9b3d-479f-4203-be65-5c71e7ee86f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.354791 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-84jtz"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.365131 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-774zt\" (UniqueName: \"kubernetes.io/projected/be440828-8884-4cd3-b30e-4eba825caa3b-kube-api-access-774zt\") pod \"catalog-operator-68c6474976-pv86t\" (UID: \"be440828-8884-4cd3-b30e-4eba825caa3b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.384956 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c70e2d16-2055-43e5-8d36-27d97a4c013e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ccxrr\" (UID: \"c70e2d16-2055-43e5-8d36-27d97a4c013e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.388664 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: E1212 00:08:00.391061 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:00.891045151 +0000 UTC m=+115.668845964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.404055 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm4w2\" (UniqueName: \"kubernetes.io/projected/ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f-kube-api-access-mm4w2\") pod \"router-default-5444994796-thf67\" (UID: \"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f\") " pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.448932 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.449884 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7657\" (UniqueName: \"kubernetes.io/projected/6e0d83f2-3607-4608-b778-92e9cc6eb572-kube-api-access-d7657\") pod \"csi-hostpathplugin-fqm67\" (UID: \"6e0d83f2-3607-4608-b778-92e9cc6eb572\") " pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.464201 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.486248 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.486844 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t96d\" (UniqueName: \"kubernetes.io/projected/16e61a0d-853c-4b17-b3c8-a12bed76808a-kube-api-access-9t96d\") pod \"machine-config-server-pjzsr\" (UID: \"16e61a0d-853c-4b17-b3c8-a12bed76808a\") " pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.487123 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.492891 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:00 crc kubenswrapper[4917]: E1212 00:08:00.493023 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:00.993003888 +0000 UTC m=+115.770804701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.493236 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: E1212 00:08:00.493668 4917 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:00.993632444 +0000 UTC m=+115.771433257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.494269 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.505025 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.516477 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx9ff\" (UniqueName: \"kubernetes.io/projected/d8e32995-fe25-4df9-8f90-8ce9b3086686-kube-api-access-hx9ff\") pod \"ingress-canary-rctq2\" (UID: \"d8e32995-fe25-4df9-8f90-8ce9b3086686\") " pod="openshift-ingress-canary/ingress-canary-rctq2" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.516882 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.520117 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29424960-ttvzz"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.529797 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.532428 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.535917 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.548822 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.563431 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.563512 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.585512 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.594158 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:00 crc kubenswrapper[4917]: E1212 00:08:00.594907 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:01.094875505 +0000 UTC m=+115.872676318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.598858 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fqm67" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.605852 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.607049 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rctq2" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.611583 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5dp8w"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.615118 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pjzsr" Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.646811 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd"] Dec 12 00:08:00 crc kubenswrapper[4917]: W1212 00:08:00.647584 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c4ee0d0_175d_436c_9161_2822246aacec.slice/crio-51ef050a493501d03d16680ae0435cab851eec2e45109f17376179aa7ffff7ae WatchSource:0}: Error finding container 51ef050a493501d03d16680ae0435cab851eec2e45109f17376179aa7ffff7ae: Status 404 returned error can't find the container with id 51ef050a493501d03d16680ae0435cab851eec2e45109f17376179aa7ffff7ae Dec 12 00:08:00 crc kubenswrapper[4917]: W1212 00:08:00.650333 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc845551a_a148_4bde_9808_f2d8d09c7616.slice/crio-2ebb483a9acd3d6f56520077616f8928392efe957bb5dd06de571c699a47b42b WatchSource:0}: Error finding container 2ebb483a9acd3d6f56520077616f8928392efe957bb5dd06de571c699a47b42b: Status 404 returned error can't find the container with id 2ebb483a9acd3d6f56520077616f8928392efe957bb5dd06de571c699a47b42b Dec 12 00:08:00 crc kubenswrapper[4917]: W1212 00:08:00.653840 4917 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14f90559_e69f_445c_a61a_2b0f881abb72.slice/crio-0925a3eb14bef06fb5ed4c6a062d0ffb5e38d73a81b801bd220215c9d65c7948 WatchSource:0}: Error finding container 0925a3eb14bef06fb5ed4c6a062d0ffb5e38d73a81b801bd220215c9d65c7948: Status 404 returned error can't find the container with id 0925a3eb14bef06fb5ed4c6a062d0ffb5e38d73a81b801bd220215c9d65c7948 Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.662634 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vtz2p"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.703012 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: E1212 00:08:00.703922 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:01.203905846 +0000 UTC m=+115.981706659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.718871 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jctrq"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.756454 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8st8c"] Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.809595 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:00 crc kubenswrapper[4917]: E1212 00:08:00.810044 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:01.310016316 +0000 UTC m=+116.087817129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:00 crc kubenswrapper[4917]: W1212 00:08:00.843018 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38699438_1690_4770_9e6c_7f7033ed2fea.slice/crio-f645615c0662ceb67cfc1d21bcfa347e225a6bfdd6c35cca6f5bf0231708adad WatchSource:0}: Error finding container f645615c0662ceb67cfc1d21bcfa347e225a6bfdd6c35cca6f5bf0231708adad: Status 404 returned error can't find the container with id f645615c0662ceb67cfc1d21bcfa347e225a6bfdd6c35cca6f5bf0231708adad Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.913534 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:00 crc kubenswrapper[4917]: E1212 00:08:00.914311 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:01.414300131 +0000 UTC m=+116.192100934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:00 crc kubenswrapper[4917]: I1212 00:08:00.941616 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw"] Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.007289 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57"] Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.008093 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-742lr" Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.016718 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.030627 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:01.5305764 +0000 UTC m=+116.308377213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.037142 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf"] Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.039033 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z"] Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.046579 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r"] Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.046637 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt"] Dec 12 00:08:01 crc kubenswrapper[4917]: W1212 00:08:01.051143 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24e2c7bd_c682_49e7_942c_eb8afe865602.slice/crio-1f41a174d66efb3613adb30f67fce5b7df8bb03a8cf091862d23d285c59636ad WatchSource:0}: Error finding container 1f41a174d66efb3613adb30f67fce5b7df8bb03a8cf091862d23d285c59636ad: Status 404 returned error can't find the container with id 1f41a174d66efb3613adb30f67fce5b7df8bb03a8cf091862d23d285c59636ad Dec 12 00:08:01 crc kubenswrapper[4917]: W1212 00:08:01.089781 4917 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89cc9c2e_68f5_4acd_bcaa_4dadcfb693b3.slice/crio-88b43a407404eab33867b3ae18df8666fd6351864a13339249977dd3ac8cc770 WatchSource:0}: Error finding container 88b43a407404eab33867b3ae18df8666fd6351864a13339249977dd3ac8cc770: Status 404 returned error can't find the container with id 88b43a407404eab33867b3ae18df8666fd6351864a13339249977dd3ac8cc770 Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.119416 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.119762 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:01.619748685 +0000 UTC m=+116.397549498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:01 crc kubenswrapper[4917]: W1212 00:08:01.161950 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51ad4779_871d_4324_af65_41e72c202e37.slice/crio-76c5172b950217577df02a68a3e11a2e5f8ff5193e5a07095f2dac4fd455f346 WatchSource:0}: Error finding container 76c5172b950217577df02a68a3e11a2e5f8ff5193e5a07095f2dac4fd455f346: Status 404 returned error can't find the container with id 76c5172b950217577df02a68a3e11a2e5f8ff5193e5a07095f2dac4fd455f346 Dec 12 00:08:01 crc kubenswrapper[4917]: W1212 00:08:01.166163 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda29599f5_a122_4891_a597_051ac247d945.slice/crio-65cbef75dc0902256e6ce72fd04d799aaec88efbdda9daa2cf8dce71d9753db0 WatchSource:0}: Error finding container 65cbef75dc0902256e6ce72fd04d799aaec88efbdda9daa2cf8dce71d9753db0: Status 404 returned error can't find the container with id 65cbef75dc0902256e6ce72fd04d799aaec88efbdda9daa2cf8dce71d9753db0 Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.170480 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp"] Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.219948 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.220382 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:01.72036721 +0000 UTC m=+116.498168023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.227824 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.228187 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:01.728175931 +0000 UTC m=+116.505976734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.274774 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nplkn"]
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.330462 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.331196 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:01.831175955 +0000 UTC m=+116.608976768 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.362662 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr"]
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.383387 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" event={"ID":"14f90559-e69f-445c-a61a-2b0f881abb72","Type":"ContainerStarted","Data":"0925a3eb14bef06fb5ed4c6a062d0ffb5e38d73a81b801bd220215c9d65c7948"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.384498 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" event={"ID":"c845551a-a148-4bde-9808-f2d8d09c7616","Type":"ContainerStarted","Data":"2ebb483a9acd3d6f56520077616f8928392efe957bb5dd06de571c699a47b42b"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.388827 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pzbrh" event={"ID":"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2","Type":"ContainerStarted","Data":"836d8bcc9dd53c060ebf1b70fee7596235e421b87c5d65dfe614312de536c4bd"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.410724 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b"]
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.419015 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" event={"ID":"a29599f5-a122-4891-a597-051ac247d945","Type":"ContainerStarted","Data":"65cbef75dc0902256e6ce72fd04d799aaec88efbdda9daa2cf8dce71d9753db0"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.421719 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" event={"ID":"8848f4aa-3570-40a9-bd53-6e22cc7ed795","Type":"ContainerStarted","Data":"483fd5edef20680980fb39d76ba25dac29e8113bd993bff858c35a9ca6921e5c"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.430162 4917 generic.go:334] "Generic (PLEG): container finished" podID="a8b3bec2-0c51-46d8-9b79-22f802b58962" containerID="5251988b9c8bbe669e9b7c647902f3b4247a0c6f3403e3e13e87d19cf6e67d29" exitCode=0
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.430267 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lss4q" event={"ID":"a8b3bec2-0c51-46d8-9b79-22f802b58962","Type":"ContainerDied","Data":"5251988b9c8bbe669e9b7c647902f3b4247a0c6f3403e3e13e87d19cf6e67d29"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.435113 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.435409 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:01.935396698 +0000 UTC m=+116.713197511 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.444370 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" event={"ID":"818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5","Type":"ContainerStarted","Data":"af432ab40976ae632819c133867cce0c33090ff4a2c16eec0b6834437ef00a70"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.457909 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" event={"ID":"b06b8a36-c12a-4604-a017-277d9a6a18ff","Type":"ContainerStarted","Data":"acf61da027c27257a63913a75ab3cf6ee5878b019cadbe07775973087ed74d9a"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.457951 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" event={"ID":"b06b8a36-c12a-4604-a017-277d9a6a18ff","Type":"ContainerStarted","Data":"5e3990f8912c7656f92d8420b0414f6b8d4384517fbef2a43116ff2801c2a0f6"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.485899 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" event={"ID":"24e2c7bd-c682-49e7-942c-eb8afe865602","Type":"ContainerStarted","Data":"1f41a174d66efb3613adb30f67fce5b7df8bb03a8cf091862d23d285c59636ad"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.487097 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" event={"ID":"ae4dff24-ae34-4029-a0a1-30e9a379f091","Type":"ContainerStarted","Data":"6c37b8aed5a19d211205cb563c5c6aec72a3af534859869c18e9dcf46815d170"}
Dec 12 00:08:01 crc kubenswrapper[4917]: W1212 00:08:01.497133 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc70e2d16_2055_43e5_8d36_27d97a4c013e.slice/crio-c47da23f20e03daeb55200c5f3b45a8f857f2c13fb80ef61fd4dcab57726e368 WatchSource:0}: Error finding container c47da23f20e03daeb55200c5f3b45a8f857f2c13fb80ef61fd4dcab57726e368: Status 404 returned error can't find the container with id c47da23f20e03daeb55200c5f3b45a8f857f2c13fb80ef61fd4dcab57726e368
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.525804 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" event={"ID":"51ad4779-871d-4324-af65-41e72c202e37","Type":"ContainerStarted","Data":"76c5172b950217577df02a68a3e11a2e5f8ff5193e5a07095f2dac4fd455f346"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.538953 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.540694 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.040638348 +0000 UTC m=+116.818439201 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.551128 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29424960-ttvzz" event={"ID":"fe29aecd-7402-47c3-a15a-d5a489c48b29","Type":"ContainerStarted","Data":"f89a9a156b06ae5d5c4acaf2390d3f4269e41aecb7d95526ecdd067cf13729f5"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.638683 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp" event={"ID":"155eee0b-00b8-4e1e-9654-e811f70e9bdf","Type":"ContainerStarted","Data":"6048d7086e767c60b6150a3d60aa67f2dad9ad85d75fce2f4757bdd45beb2ecc"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.638719 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jctrq" event={"ID":"f3af21ad-4bfb-4640-9589-c46313fb2379","Type":"ContainerStarted","Data":"3993b28d7c9f2d330d999ba4b1c3d487d8e066d5bd98d980edb7ed7820a3dc63"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.641223 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.641737 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.141719854 +0000 UTC m=+116.919520667 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.655350 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" event={"ID":"8caaf389-4a14-4f76-b767-5f70b6a2b14d","Type":"ContainerStarted","Data":"8a9048e321984a899058054ebeff463c25d9e281d89b0c1e01f04364a6d80f7c"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.690718 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" event={"ID":"e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2","Type":"ContainerStarted","Data":"8e07609f5aa9604a2a208178e5567c7cb191562207854d2e7192dfa7525f0153"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.694371 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" event={"ID":"5c4ee0d0-175d-436c-9161-2822246aacec","Type":"ContainerStarted","Data":"51ef050a493501d03d16680ae0435cab851eec2e45109f17376179aa7ffff7ae"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.724032 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" event={"ID":"773a5eef-2c13-4979-95c7-f2ad9edf9783","Type":"ContainerStarted","Data":"9aa36fd569adf8101e544827658b557979fbbc83066d6097d22bef9e815865ca"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.745532 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.746214 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.246188973 +0000 UTC m=+117.023989786 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.759630 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-thf67" event={"ID":"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f","Type":"ContainerStarted","Data":"27670f07f208204b5322783f4729d6b86e077ffa05e82c7987fa845eb115fd40"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.782052 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb"]
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.785529 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" event={"ID":"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3","Type":"ContainerStarted","Data":"88b43a407404eab33867b3ae18df8666fd6351864a13339249977dd3ac8cc770"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.786061 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fhrmw"]
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.807260 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" event={"ID":"9ba57102-f6a5-41ef-a83c-951795076ab5","Type":"ContainerStarted","Data":"4005151fd13609f6eda51e404d4a5d54d407e29eb7eb9aa611fef468883fc631"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.831100 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" event={"ID":"13fa6201-10ae-43a7-95d2-190b604c2594","Type":"ContainerStarted","Data":"69672786c1ee6d13c6ff438acb0287731589ee2e81414ca3d2311849a6b55966"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.848613 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.849000 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.348984242 +0000 UTC m=+117.126785055 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:01 crc kubenswrapper[4917]: W1212 00:08:01.890603 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd9f9b3d_479f_4203_be65_5c71e7ee86f7.slice/crio-cadb30bf40a3584a3c216044b8cfaf72e9536d6d582a00c6cb990f775e3d53c2 WatchSource:0}: Error finding container cadb30bf40a3584a3c216044b8cfaf72e9536d6d582a00c6cb990f775e3d53c2: Status 404 returned error can't find the container with id cadb30bf40a3584a3c216044b8cfaf72e9536d6d582a00c6cb990f775e3d53c2
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.890838 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" event={"ID":"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e","Type":"ContainerStarted","Data":"0ddcf91441324d0d6c01d87c3be823672ce626c4bccc115bdd14a456637818b6"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.890878 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" event={"ID":"09fd0f32-0d7e-48c7-a673-ca45f8db8e8e","Type":"ContainerStarted","Data":"35594b594ad491dc1eb8bddd8e3205573fa12e986c24917a6cdfb545c856ac18"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.956958 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.957680 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.457626834 +0000 UTC m=+117.235427647 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.957868 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:01 crc kubenswrapper[4917]: E1212 00:08:01.959239 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.459230394 +0000 UTC m=+117.237031207 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.967379 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" event={"ID":"4b029a79-9426-4acc-af09-c11c8216777c","Type":"ContainerStarted","Data":"ba457d469c2336538d0dc57bad52c11dd265182877ca05820e03b67b3b25d87f"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.970173 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct"
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.987222 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" event={"ID":"a189923c-a229-479b-8afa-12d4fe112a83","Type":"ContainerStarted","Data":"c27a0dbdb4455aa0000155cabae20dfccbfc824dfae8645a826cd4ac04d0cf5e"}
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.995662 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8frb7" podStartSLOduration=92.995616435 podStartE2EDuration="1m32.995616435s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:01.973577675 +0000 UTC m=+116.751378498" watchObservedRunningTime="2025-12-12 00:08:01.995616435 +0000 UTC m=+116.773417248"
Dec 12 00:08:01 crc kubenswrapper[4917]: I1212 00:08:01.998145 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pjzsr" event={"ID":"16e61a0d-853c-4b17-b3c8-a12bed76808a","Type":"ContainerStarted","Data":"2b1d55f7e83bb8149175c22a1ae8f81fc77654c4c1af094c950add56eb96cafc"}
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.008065 4917 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t8zct container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.008164 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" podUID="4b029a79-9426-4acc-af09-c11c8216777c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.043032 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" event={"ID":"38699438-1690-4770-9e6c-7f7033ed2fea","Type":"ContainerStarted","Data":"f645615c0662ceb67cfc1d21bcfa347e225a6bfdd6c35cca6f5bf0231708adad"}
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.060565 4917 generic.go:334] "Generic (PLEG): container finished" podID="6e70d859-42e8-4d15-85be-8456028abbc5" containerID="7b50442cf5feb42bb4c8e57c805ee5ad1376720afbfb897097974ab31948b27e" exitCode=0
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.061705 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" podStartSLOduration=93.061679704 podStartE2EDuration="1m33.061679704s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:02.008318597 +0000 UTC m=+116.786119420" watchObservedRunningTime="2025-12-12 00:08:02.061679704 +0000 UTC m=+116.839480527"
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.065732 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" event={"ID":"6e70d859-42e8-4d15-85be-8456028abbc5","Type":"ContainerDied","Data":"7b50442cf5feb42bb4c8e57c805ee5ad1376720afbfb897097974ab31948b27e"}
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.096108 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.096347 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.596332262 +0000 UTC m=+117.374133075 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.099434 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dzpcq" podStartSLOduration=93.099399478 podStartE2EDuration="1m33.099399478s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:02.088130662 +0000 UTC m=+116.865931475" watchObservedRunningTime="2025-12-12 00:08:02.099399478 +0000 UTC m=+116.877200291"
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.101048 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.108189 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.608156342 +0000 UTC m=+117.385957155 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.108549 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88"
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.138423 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t"]
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.140533 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-742lr" podStartSLOduration=93.140518665 podStartE2EDuration="1m33.140518665s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:02.132905419 +0000 UTC m=+116.910706232" watchObservedRunningTime="2025-12-12 00:08:02.140518665 +0000 UTC m=+116.918319478"
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.148344 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc"]
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.174583 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t9t5t" podStartSLOduration=93.174551439 podStartE2EDuration="1m33.174551439s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:02.168194714 +0000 UTC m=+116.945995547" watchObservedRunningTime="2025-12-12 00:08:02.174551439 +0000 UTC m=+116.952352262"
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.208890 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.209439 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.709389782 +0000 UTC m=+117.487190595 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.257515 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-97xnz" podStartSLOduration=93.257488782 podStartE2EDuration="1m33.257488782s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:02.218175137 +0000 UTC m=+116.995975950" watchObservedRunningTime="2025-12-12 00:08:02.257488782 +0000 UTC m=+117.035289605"
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.280461 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t59j6"]
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.291018 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fqm67"]
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.311833 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.312355 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.812335395 +0000 UTC m=+117.590136278 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.326406 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rctq2"]
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.368244 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" podStartSLOduration=93.368220845 podStartE2EDuration="1m33.368220845s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:02.362129575 +0000 UTC m=+117.139930388" watchObservedRunningTime="2025-12-12 00:08:02.368220845 +0000 UTC m=+117.146021658"
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.413302 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.413457 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.913433432 +0000 UTC m=+117.691234245 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.413567 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.413987 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:02.913962825 +0000 UTC m=+117.691763718 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.514695 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.514878 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.014848846 +0000 UTC m=+117.792649659 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.515051 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.515534 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.015527083 +0000 UTC m=+117.793327896 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.615827 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.615950 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.115926314 +0000 UTC m=+117.893727137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.616149 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.616536 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.116522518 +0000 UTC m=+117.894323331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.716692 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.716914 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.216891207 +0000 UTC m=+117.994692020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.717034 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.717515 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.217504252 +0000 UTC m=+117.995305155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.817761 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.818043 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.318029925 +0000 UTC m=+118.095830738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:02 crc kubenswrapper[4917]: I1212 00:08:02.919829 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:02 crc kubenswrapper[4917]: E1212 00:08:02.920190 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.420172008 +0000 UTC m=+118.197972811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.021288 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.021537 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.52148485 +0000 UTC m=+118.299285663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.021917 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.022444 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.522409923 +0000 UTC m=+118.300210736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.066902 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29424960-ttvzz" event={"ID":"fe29aecd-7402-47c3-a15a-d5a489c48b29","Type":"ContainerStarted","Data":"e3f6d5e6097b505eb363fe2264106b8e938bde6586131165f84667e31eb15b39"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.068790 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" event={"ID":"9e3540e9-d5bc-43cb-bc1b-28787574cd08","Type":"ContainerStarted","Data":"bef0e1e114c61b293dd10f78ef403250840b8db59f4dd2b9c69ff3d6bf0c7f86"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.070456 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" event={"ID":"ae4dff24-ae34-4029-a0a1-30e9a379f091","Type":"ContainerStarted","Data":"71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.071277 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.073254 4917 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5dp8w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" 
start-of-body= Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.073403 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.073377 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" event={"ID":"4b029a79-9426-4acc-af09-c11c8216777c","Type":"ContainerStarted","Data":"018aaddea4c628f1a7d3769b1562c602faf9c6f7433d4886a4677a851f1672eb"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.074191 4917 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t8zct container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.074235 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" podUID="4b029a79-9426-4acc-af09-c11c8216777c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.075057 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" event={"ID":"818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5","Type":"ContainerStarted","Data":"d43d967efd75ed275581fd83e8c69c1268539dece23b49ef8695ab835e31d833"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.076464 4917 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" event={"ID":"24e2c7bd-c682-49e7-942c-eb8afe865602","Type":"ContainerStarted","Data":"1d9a945b1a23830a725b1a6607875516e1dd5e297fb2cf0ffe345ee4645c5a36"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.077132 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.078263 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" event={"ID":"c70e2d16-2055-43e5-8d36-27d97a4c013e","Type":"ContainerStarted","Data":"c47da23f20e03daeb55200c5f3b45a8f857f2c13fb80ef61fd4dcab57726e368"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.078449 4917 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8st8c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.078530 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" podUID="24e2c7bd-c682-49e7-942c-eb8afe865602" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.080297 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" event={"ID":"fd9f9b3d-479f-4203-be65-5c71e7ee86f7","Type":"ContainerStarted","Data":"cadb30bf40a3584a3c216044b8cfaf72e9536d6d582a00c6cb990f775e3d53c2"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.081339 4917 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" event={"ID":"d69caa5d-484b-4415-8c05-61b9a2729dd7","Type":"ContainerStarted","Data":"b6643676b8e06c63e88c4f0805fb927d0b15ae0faad5e7f411933c9d48a12525"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.083557 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" event={"ID":"8848f4aa-3570-40a9-bd53-6e22cc7ed795","Type":"ContainerStarted","Data":"b94bc81a907c29bed2a5043f8ebea6dc9745e492ccdeead6f977401587320964"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.086963 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29424960-ttvzz" podStartSLOduration=94.086945304 podStartE2EDuration="1m34.086945304s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:03.082237979 +0000 UTC m=+117.860038802" watchObservedRunningTime="2025-12-12 00:08:03.086945304 +0000 UTC m=+117.864746127" Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.087337 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" event={"ID":"1806fefb-c93c-4c0d-a7c8-dde659a77fbd","Type":"ContainerStarted","Data":"3721fae89564a5eb46f2a1e970f0a5acadb88f1e9a3895b3ca504167814eacb3"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.090994 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pzbrh" event={"ID":"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2","Type":"ContainerStarted","Data":"7ea3fa080bdf2625caad6724d8a6e19b086eac2ecd90f13909740c22399af30d"} Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.103221 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" podStartSLOduration=94.103203493 podStartE2EDuration="1m34.103203493s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:03.102215058 +0000 UTC m=+117.880015881" watchObservedRunningTime="2025-12-12 00:08:03.103203493 +0000 UTC m=+117.881004306" Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.121122 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" podStartSLOduration=94.121103671 podStartE2EDuration="1m34.121103671s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:03.120375693 +0000 UTC m=+117.898176516" watchObservedRunningTime="2025-12-12 00:08:03.121103671 +0000 UTC m=+117.898904494" Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.123083 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.123159 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.623145371 +0000 UTC m=+118.400946184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.134392 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.138957 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.638925117 +0000 UTC m=+118.416725930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.140512 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdfvr" podStartSLOduration=94.140491496 podStartE2EDuration="1m34.140491496s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:03.137994664 +0000 UTC m=+117.915795477" watchObservedRunningTime="2025-12-12 00:08:03.140491496 +0000 UTC m=+117.918292319" Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.235518 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.235938 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.735921424 +0000 UTC m=+118.513722237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.337097 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.337916 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.837902952 +0000 UTC m=+118.615703765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.440843 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.441284 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:03.941240245 +0000 UTC m=+118.719041238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.543369 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.543845 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:04.043827368 +0000 UTC m=+118.821628271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.644073 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.644446 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:04.144394382 +0000 UTC m=+118.922195245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.745537 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.746176 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:04.246151396 +0000 UTC m=+119.023952399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.846836 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.847586 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:04.34757044 +0000 UTC m=+119.125371253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:03 crc kubenswrapper[4917]: I1212 00:08:03.949401 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:03 crc kubenswrapper[4917]: E1212 00:08:03.950048 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:04.45003307 +0000 UTC m=+119.227833883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.051482 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:04 crc kubenswrapper[4917]: E1212 00:08:04.052267 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:04.552236694 +0000 UTC m=+119.330037517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.153581 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:04 crc kubenswrapper[4917]: E1212 00:08:04.154681 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:04.654638834 +0000 UTC m=+119.432439647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.249566 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" event={"ID":"c70e2d16-2055-43e5-8d36-27d97a4c013e","Type":"ContainerStarted","Data":"89cf0587916e71c3b7540a401aa3e22dc1fe9fd8e6bc4971872d8cc1e99b3833"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.257408 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:04 crc kubenswrapper[4917]: E1212 00:08:04.257881 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:04.757860262 +0000 UTC m=+119.535661075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.279914 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pjzsr" event={"ID":"16e61a0d-853c-4b17-b3c8-a12bed76808a","Type":"ContainerStarted","Data":"57b3bb8f3cd86770b067f33088f43acfe0a8c22de826d9556ae615af07407d97"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.300266 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lss4q" event={"ID":"a8b3bec2-0c51-46d8-9b79-22f802b58962","Type":"ContainerStarted","Data":"21a6804e1df3f97985cf36ba4952ea0e9d3bd49d61b5c9b7c48e95c3f70dfca7"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.313505 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccxrr" podStartSLOduration=95.313481205 podStartE2EDuration="1m35.313481205s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.303661965 +0000 UTC m=+119.081462788" watchObservedRunningTime="2025-12-12 00:08:04.313481205 +0000 UTC m=+119.091282048" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.329108 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" 
event={"ID":"8caaf389-4a14-4f76-b767-5f70b6a2b14d","Type":"ContainerStarted","Data":"77b484477d6b2a756b22db822d087ba2ff524509ffcc4b155280c811f277dc74"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.370776 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6286r" podStartSLOduration=95.370756179 podStartE2EDuration="1m35.370756179s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.370271877 +0000 UTC m=+119.148072700" watchObservedRunningTime="2025-12-12 00:08:04.370756179 +0000 UTC m=+119.148556992" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.371958 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.372562 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-pjzsr" podStartSLOduration=7.372554512 podStartE2EDuration="7.372554512s" podCreationTimestamp="2025-12-12 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.339062832 +0000 UTC m=+119.116863665" watchObservedRunningTime="2025-12-12 00:08:04.372554512 +0000 UTC m=+119.150355325" Dec 12 00:08:04 crc kubenswrapper[4917]: E1212 00:08:04.373470 4917 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:04.873446975 +0000 UTC m=+119.651247998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.427227 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" event={"ID":"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3","Type":"ContainerStarted","Data":"e639bce05b0759566537525b1967ee86906aed39ffde8452dc3a683660a8fd9e"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.462420 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" event={"ID":"9ba57102-f6a5-41ef-a83c-951795076ab5","Type":"ContainerStarted","Data":"ef4e805cea509b4aba88df7fe69c3e33bc740352ee0e52f0daa88ebae712eeea"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.470335 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jctrq" event={"ID":"f3af21ad-4bfb-4640-9589-c46313fb2379","Type":"ContainerStarted","Data":"11d04ef42a34a077fdab81583a22bb8d1273fd2b31e766d7b79f7a35c1ae3c9d"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.471696 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jctrq" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.472634 4917 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:04 crc kubenswrapper[4917]: E1212 00:08:04.473464 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:04.973437174 +0000 UTC m=+119.751237987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.478141 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" event={"ID":"e42e141a-1daa-49d0-b4f0-7b6fdc9ea9c2","Type":"ContainerStarted","Data":"5c17566d08049f180a8ff321151e25884dc0aedd448a2fc1a988103383ea6b15"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.480289 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-thf67" event={"ID":"ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f","Type":"ContainerStarted","Data":"76183d7d0785750fe95156667488933ec205915f5143633d3f99a3ee4e579d8e"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.483401 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" 
event={"ID":"b06b8a36-c12a-4604-a017-277d9a6a18ff","Type":"ContainerStarted","Data":"ca2c011dacc12737948644b431c038247e8e94389ce2afc7c440cf62c7136354"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.486035 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rctq2" event={"ID":"d8e32995-fe25-4df9-8f90-8ce9b3086686","Type":"ContainerStarted","Data":"271b2c16fe1d10d459c6383a980e3fa5a74c44d7976c3ee771bdaca1d3ef3106"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.494930 4917 patch_prober.go:28] interesting pod/downloads-7954f5f757-jctrq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.494965 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jctrq" podUID="f3af21ad-4bfb-4640-9589-c46313fb2379" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.532769 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" event={"ID":"c845551a-a148-4bde-9808-f2d8d09c7616","Type":"ContainerStarted","Data":"9a483eefb6e700060ae4b56b067b51445c9941d425b6d662b0fc5650347cd093"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.536814 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-thf67" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.548780 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp 
[::1]:1936: connect: connection refused" start-of-body= Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.548846 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.550635 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tn6z" podStartSLOduration=95.550599315 podStartE2EDuration="1m35.550599315s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.54918757 +0000 UTC m=+119.326988393" watchObservedRunningTime="2025-12-12 00:08:04.550599315 +0000 UTC m=+119.328400128" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.555768 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" event={"ID":"773a5eef-2c13-4979-95c7-f2ad9edf9783","Type":"ContainerStarted","Data":"b0c6a26b8b78385b6bf814738c10b7b497381cd7b5ff004d322730d9a48d01bc"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.558151 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" event={"ID":"5c4ee0d0-175d-436c-9161-2822246aacec","Type":"ContainerStarted","Data":"6fee1af40784151c459d53bc5c9c41f32cbb0868b36b60de3fb433bc2272b9d3"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.575197 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:04 crc kubenswrapper[4917]: E1212 00:08:04.575615 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:05.075604168 +0000 UTC m=+119.853404981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.607730 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp" event={"ID":"155eee0b-00b8-4e1e-9654-e811f70e9bdf","Type":"ContainerStarted","Data":"a76692d283eb57648a6b222352d0cdad6cf7f089efdc00548a1f22721ed280fc"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.618262 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jctrq" podStartSLOduration=95.618245223 podStartE2EDuration="1m35.618245223s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.574465899 +0000 UTC m=+119.352266702" watchObservedRunningTime="2025-12-12 00:08:04.618245223 +0000 UTC m=+119.396046036" Dec 12 
00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.619684 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-84jtz" podStartSLOduration=95.619677798 podStartE2EDuration="1m35.619677798s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.615090305 +0000 UTC m=+119.392891118" watchObservedRunningTime="2025-12-12 00:08:04.619677798 +0000 UTC m=+119.397478611" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.647772 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxz2" podStartSLOduration=95.647744495 podStartE2EDuration="1m35.647744495s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.644117716 +0000 UTC m=+119.421918529" watchObservedRunningTime="2025-12-12 00:08:04.647744495 +0000 UTC m=+119.425545338" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.678815 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:04 crc kubenswrapper[4917]: E1212 00:08:04.680811 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:05.180789904 +0000 UTC m=+119.958590717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.692157 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" event={"ID":"be440828-8884-4cd3-b30e-4eba825caa3b","Type":"ContainerStarted","Data":"3e7663f1a89abafa0cd8c3919ba4e7977ed5817889337e306c158c9b1cce7d45"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.693371 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.719478 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fqm67" event={"ID":"6e0d83f2-3607-4608-b778-92e9cc6eb572","Type":"ContainerStarted","Data":"a1637bf8e206db06b87d1a0b888004621c369be6208ab28cd36fec548ea91801"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.725196 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-thf67" podStartSLOduration=95.725173322 podStartE2EDuration="1m35.725173322s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.724406883 +0000 UTC m=+119.502207696" watchObservedRunningTime="2025-12-12 00:08:04.725173322 +0000 UTC m=+119.502974135" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.744717 4917 patch_prober.go:28] 
interesting pod/catalog-operator-68c6474976-pv86t container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.744781 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" podUID="be440828-8884-4cd3-b30e-4eba825caa3b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.750524 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.767126 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" event={"ID":"6d6ecfb5-06aa-4f53-bc43-74d425b172db","Type":"ContainerStarted","Data":"dc30b5c4f7fe3a2855b8f1d62243155110196696b73010f53afa394194f533af"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.782175 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:04 crc kubenswrapper[4917]: E1212 00:08:04.782560 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 00:08:05.282543748 +0000 UTC m=+120.060344561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.787722 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rctq2" podStartSLOduration=7.787702064 podStartE2EDuration="7.787702064s" podCreationTimestamp="2025-12-12 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.786941615 +0000 UTC m=+119.564742428" watchObservedRunningTime="2025-12-12 00:08:04.787702064 +0000 UTC m=+119.565502877" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.787949 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-w7hp2" podStartSLOduration=95.78794381 podStartE2EDuration="1m35.78794381s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.76099688 +0000 UTC m=+119.538797703" watchObservedRunningTime="2025-12-12 00:08:04.78794381 +0000 UTC m=+119.565744623" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.794819 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" 
event={"ID":"9e3540e9-d5bc-43cb-bc1b-28787574cd08","Type":"ContainerStarted","Data":"085e4f93d62c1f13859206a6bedbad78be2522dadaca4c06900da7da7e363ea2"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.832820 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.838679 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" event={"ID":"a29599f5-a122-4891-a597-051ac247d945","Type":"ContainerStarted","Data":"fe13922b2a47cf4562d112f021dad7f9707ce57079b9b51ea467fdcdff753153"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.838936 4917 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-9th2b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.838966 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" podUID="d69caa5d-484b-4415-8c05-61b9a2729dd7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.857965 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" podStartSLOduration=95.857945945 podStartE2EDuration="1m35.857945945s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.819526203 +0000 UTC m=+119.597327026" 
watchObservedRunningTime="2025-12-12 00:08:04.857945945 +0000 UTC m=+119.635746758" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.859466 4917 generic.go:334] "Generic (PLEG): container finished" podID="14f90559-e69f-445c-a61a-2b0f881abb72" containerID="39fc84e35d50a57d3bb348bbd0effcf02404dfb33d5cef56f4cd832483dd3d28" exitCode=0 Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.859544 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" event={"ID":"14f90559-e69f-445c-a61a-2b0f881abb72","Type":"ContainerDied","Data":"39fc84e35d50a57d3bb348bbd0effcf02404dfb33d5cef56f4cd832483dd3d28"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.883283 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:04 crc kubenswrapper[4917]: E1212 00:08:04.883567 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:05.383531192 +0000 UTC m=+120.161331995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.883694 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" event={"ID":"51ad4779-871d-4324-af65-41e72c202e37","Type":"ContainerStarted","Data":"d2ab059b8756c3d0dac26701d3a8b6a1be6c4b5553064e81807964300ddfa578"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.892103 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" podStartSLOduration=95.892085742 podStartE2EDuration="1m35.892085742s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.890471502 +0000 UTC m=+119.668272315" watchObservedRunningTime="2025-12-12 00:08:04.892085742 +0000 UTC m=+119.669886555" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.892331 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" podStartSLOduration=95.892327387 podStartE2EDuration="1m35.892327387s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.858478708 +0000 UTC m=+119.636279521" watchObservedRunningTime="2025-12-12 00:08:04.892327387 +0000 UTC 
m=+119.670128200" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.901730 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" event={"ID":"a189923c-a229-479b-8afa-12d4fe112a83","Type":"ContainerStarted","Data":"38f18c8c3a96cd6db1a2227647d4478cda8861cc394ac266eb7c40c32f628013"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.905116 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" event={"ID":"38699438-1690-4770-9e6c-7f7033ed2fea","Type":"ContainerStarted","Data":"a3ae160b037fd69f870a5e16857fef93f8c44ccbf316fa72425370d5d4018e6c"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.916034 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" event={"ID":"22813008-d948-4643-a6d2-688ce469bada","Type":"ContainerStarted","Data":"3e78efcdff04bccda27789cca8f617b40a698a8aa21a61799be83f112effe223"} Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.916084 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.917927 4917 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8st8c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.917967 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" podUID="24e2c7bd-c682-49e7-942c-eb8afe865602" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Dec 12 
00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.944808 4917 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-c84zc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.944874 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" podUID="22813008-d948-4643-a6d2-688ce469bada" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.945237 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.945550 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.947576 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77mdt" podStartSLOduration=95.947541191 podStartE2EDuration="1m35.947541191s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:04.945379747 +0000 UTC m=+119.723180560" watchObservedRunningTime="2025-12-12 00:08:04.947541191 +0000 UTC m=+119.725342004" Dec 12 00:08:04 crc kubenswrapper[4917]: I1212 00:08:04.985320 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:04 crc kubenswrapper[4917]: E1212 00:08:04.985688 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:05.485674325 +0000 UTC m=+120.263475138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.074867 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" podStartSLOduration=96.07484135 podStartE2EDuration="1m36.07484135s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:05.003307067 +0000 UTC m=+119.781107870" watchObservedRunningTime="2025-12-12 00:08:05.07484135 +0000 UTC m=+119.852642183" Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.086662 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:05 crc kubenswrapper[4917]: E1212 00:08:05.087495 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:05.587481149 +0000 UTC m=+120.365281962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.131484 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vtz2p" podStartSLOduration=96.131458847 podStartE2EDuration="1m36.131458847s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:05.12504398 +0000 UTC m=+119.902844793" watchObservedRunningTime="2025-12-12 00:08:05.131458847 +0000 UTC m=+119.909259660" Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.189620 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:05 crc 
kubenswrapper[4917]: E1212 00:08:05.190036 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:05.690016461 +0000 UTC m=+120.467817274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.219169 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" podStartSLOduration=96.219144315 podStartE2EDuration="1m36.219144315s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:05.216292466 +0000 UTC m=+119.994093299" watchObservedRunningTime="2025-12-12 00:08:05.219144315 +0000 UTC m=+119.996945128" Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.249564 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" podStartSLOduration=96.24954001 podStartE2EDuration="1m36.24954001s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:05.245604614 +0000 UTC m=+120.023405427" watchObservedRunningTime="2025-12-12 00:08:05.24954001 +0000 UTC m=+120.027340833" Dec 
12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.293270 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:05 crc kubenswrapper[4917]: E1212 00:08:05.293680 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:05.79364087 +0000 UTC m=+120.571441683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.394795 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:05 crc kubenswrapper[4917]: E1212 00:08:05.395183 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 00:08:05.895167478 +0000 UTC m=+120.672968291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.495912 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:05 crc kubenswrapper[4917]: E1212 00:08:05.496130 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:05.996095561 +0000 UTC m=+120.773896374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.496966 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:05 crc kubenswrapper[4917]: E1212 00:08:05.497390 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:05.997371862 +0000 UTC m=+120.775172675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.542072 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 00:08:05 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld Dec 12 00:08:05 crc kubenswrapper[4917]: [+]process-running ok Dec 12 00:08:05 crc kubenswrapper[4917]: healthz check failed Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.542163 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.598098 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:05 crc kubenswrapper[4917]: E1212 00:08:05.598819 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-12 00:08:06.098794677 +0000 UTC m=+120.876595490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.721181 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:05 crc kubenswrapper[4917]: E1212 00:08:05.722408 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:06.222388665 +0000 UTC m=+121.000189478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.826608 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:05 crc kubenswrapper[4917]: E1212 00:08:05.827021 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:06.327002958 +0000 UTC m=+121.104803771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.929251 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:05 crc kubenswrapper[4917]: E1212 00:08:05.929526 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:06.42951449 +0000 UTC m=+121.207315303 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.967279 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" event={"ID":"22813008-d948-4643-a6d2-688ce469bada","Type":"ContainerStarted","Data":"e826db3e7d57d69acffbb1fa2642eed93e8601c3a79abd1f01e02503530751b9"} Dec 12 00:08:05 crc kubenswrapper[4917]: I1212 00:08:05.979931 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp" event={"ID":"155eee0b-00b8-4e1e-9654-e811f70e9bdf","Type":"ContainerStarted","Data":"2ec4907c0e39a009fe2194faa9df356853c4c79c715a015900a3bf95457ed4bb"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.009180 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" event={"ID":"a189923c-a229-479b-8afa-12d4fe112a83","Type":"ContainerStarted","Data":"d2c14d2494e1b41af7d04c94360d47f8d86a3e817b50c9a5cd501a2c13e3a780"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.027911 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" event={"ID":"be440828-8884-4cd3-b30e-4eba825caa3b","Type":"ContainerStarted","Data":"d800edddcbfb8a4a85e58f6f2d4ff5a60829867571a943b63a9862fe3b56797f"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.030105 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.031011 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:06.530995606 +0000 UTC m=+121.308796419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.050907 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rctq2" event={"ID":"d8e32995-fe25-4df9-8f90-8ce9b3086686","Type":"ContainerStarted","Data":"994188b37ff255b5080a87e79a47181a4983a281f5f76633c2ce00ee7780a533"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.090710 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkfhd" podStartSLOduration=97.090689318 podStartE2EDuration="1m37.090689318s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.088543466 +0000 UTC m=+120.866344279" watchObservedRunningTime="2025-12-12 00:08:06.090689318 
+0000 UTC m=+120.868490131" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.091703 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b88pp" podStartSLOduration=97.091697453 podStartE2EDuration="1m37.091697453s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.05194044 +0000 UTC m=+120.829741263" watchObservedRunningTime="2025-12-12 00:08:06.091697453 +0000 UTC m=+120.869498266" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.092030 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" event={"ID":"818a4f2a-783f-4c5f-b119-a6a4a5a3b8f5","Type":"ContainerStarted","Data":"00dfa08f4d5aa8f432934c7afbfcf7d31d3bd55bd98992fcae291909c8771062"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.103835 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" event={"ID":"9e3540e9-d5bc-43cb-bc1b-28787574cd08","Type":"ContainerStarted","Data":"1b63b73e4ca64a27e8f47a777a55c69474eb73024f9d2f1f513585b30b45a44e"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.113081 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" event={"ID":"fd9f9b3d-479f-4203-be65-5c71e7ee86f7","Type":"ContainerStarted","Data":"359637db4dbead25adfbd9c7232350ccf1b0d8d5e0eeaf11948e6f7b4b1e1830"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.121683 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.131455 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.132036 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:06.632020842 +0000 UTC m=+121.409821735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.135296 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bq5kv" podStartSLOduration=97.135278471 podStartE2EDuration="1m37.135278471s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.133070167 +0000 UTC m=+120.910870980" watchObservedRunningTime="2025-12-12 00:08:06.135278471 +0000 UTC m=+120.913079284" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.146539 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fqm67" 
event={"ID":"6e0d83f2-3607-4608-b778-92e9cc6eb572","Type":"ContainerStarted","Data":"68cab4c4b82a961097663ab49110974268ed52bde5a99c0527d41964d5022df2"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.175838 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bslfb" podStartSLOduration=97.175818324 podStartE2EDuration="1m37.175818324s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.174111633 +0000 UTC m=+120.951912446" watchObservedRunningTime="2025-12-12 00:08:06.175818324 +0000 UTC m=+120.953619137" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.188196 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" event={"ID":"51ad4779-871d-4324-af65-41e72c202e37","Type":"ContainerStarted","Data":"1caea33cd85b79d33e0d66e00f4c0507b0c5d02cb418a78e4fd69d56cff4c1ea"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.188877 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.233132 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.233926 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" 
event={"ID":"89cc9c2e-68f5-4acd-bcaa-4dadcfb693b3","Type":"ContainerStarted","Data":"5809bdb4808bc9be1bfd6fb4a90e859bce335091fa9fe111d3271e230be3786b"} Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.234128 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:06.734111503 +0000 UTC m=+121.511912306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.270235 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" event={"ID":"a29599f5-a122-4891-a597-051ac247d945","Type":"ContainerStarted","Data":"64e681078c1fc635bd80006728c90313529b764f692df6f3dcd54a73d6dd7d1d"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.325935 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" event={"ID":"14f90559-e69f-445c-a61a-2b0f881abb72","Type":"ContainerStarted","Data":"88efc4076cacf1c8c710d278485c6fc154a78196cbacb8d015c19ac6fc8fa982"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.335970 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.342026 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:06.842005936 +0000 UTC m=+121.619806749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.383809 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" event={"ID":"6e70d859-42e8-4d15-85be-8456028abbc5","Type":"ContainerStarted","Data":"348f336352e07e70d0c4cd8c1f80df1a71c0792237fb02c5360651bbe202bd20"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.440964 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" event={"ID":"1806fefb-c93c-4c0d-a7c8-dde659a77fbd","Type":"ContainerStarted","Data":"947c42f1318211df7681590f626cd1dd3ef080703a44220932438ac8bc899f53"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.441256 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.441608 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:06.941589286 +0000 UTC m=+121.719390109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.441919 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.444221 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:06.94420639 +0000 UTC m=+121.722007203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.454703 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nplkn" podStartSLOduration=97.454685807 podStartE2EDuration="1m37.454685807s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.361994316 +0000 UTC m=+121.139795129" watchObservedRunningTime="2025-12-12 00:08:06.454685807 +0000 UTC m=+121.232486630" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.456052 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qsfsf" podStartSLOduration=97.45604716 podStartE2EDuration="1m37.45604716s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.454288877 +0000 UTC m=+121.232089700" watchObservedRunningTime="2025-12-12 00:08:06.45604716 +0000 UTC m=+121.233847973" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.460322 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lss4q" event={"ID":"a8b3bec2-0c51-46d8-9b79-22f802b58962","Type":"ContainerStarted","Data":"fe302096bfdd7c9852d3db8142119bdbd2dfd2ed055214ca3f47b7e312f7d680"} Dec 12 00:08:06 
crc kubenswrapper[4917]: I1212 00:08:06.549385 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.550184 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.050166156 +0000 UTC m=+121.827966969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.553534 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 00:08:06 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld Dec 12 00:08:06 crc kubenswrapper[4917]: [+]process-running ok Dec 12 00:08:06 crc kubenswrapper[4917]: healthz check failed Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.553609 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.555117 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t59j6" event={"ID":"6d6ecfb5-06aa-4f53-bc43-74d425b172db","Type":"ContainerStarted","Data":"5b86bf462a0bd9409905d5e7ed1cbd75168e8cab048df367102c68578a329b1a"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.572475 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw" podStartSLOduration=97.572450402 podStartE2EDuration="1m37.572450402s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.527412979 +0000 UTC m=+121.305213802" watchObservedRunningTime="2025-12-12 00:08:06.572450402 +0000 UTC m=+121.350251215" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.624511 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pzbrh" event={"ID":"63b111b7-6fe4-4dba-9c9e-eec6186f2ba2","Type":"ContainerStarted","Data":"ee8a2a3ff9ba905fe4ee68628dc5d65ac6af670853177ff5dc8da9031cb95a93"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.625418 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pzbrh" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.644865 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77" podStartSLOduration=97.644831386 podStartE2EDuration="1m37.644831386s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.621970696 +0000 UTC 
m=+121.399771519" watchObservedRunningTime="2025-12-12 00:08:06.644831386 +0000 UTC m=+121.422632219" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.664288 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.664546 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.164536168 +0000 UTC m=+121.942336981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.677920 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" event={"ID":"d69caa5d-484b-4415-8c05-61b9a2729dd7","Type":"ContainerStarted","Data":"bb8e02cb00133b1e7683d57169e783ded74919d6ea293089bd264a61ad4c09e4"} Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.686464 4917 patch_prober.go:28] interesting pod/downloads-7954f5f757-jctrq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: 
connect: connection refused" start-of-body= Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.686512 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jctrq" podUID="f3af21ad-4bfb-4640-9589-c46313fb2379" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.690976 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.716340 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9th2b" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.736447 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4fk57" podStartSLOduration=97.736414919 podStartE2EDuration="1m37.736414919s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.730890395 +0000 UTC m=+121.508691208" watchObservedRunningTime="2025-12-12 00:08:06.736414919 +0000 UTC m=+121.514215732" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.768807 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.769841 4917 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.269789377 +0000 UTC m=+122.047590190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.870534 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.874914 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.374898272 +0000 UTC m=+122.152699085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.938826 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pzbrh" podStartSLOduration=9.938784828 podStartE2EDuration="9.938784828s" podCreationTimestamp="2025-12-12 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.935106668 +0000 UTC m=+121.712907481" watchObservedRunningTime="2025-12-12 00:08:06.938784828 +0000 UTC m=+121.716585641" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.968441 4917 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-c84zc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.968570 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" podUID="22813008-d948-4643-a6d2-688ce469bada" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.972078 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:06 crc kubenswrapper[4917]: E1212 00:08:06.972594 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.472573556 +0000 UTC m=+122.250374369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:06 crc kubenswrapper[4917]: I1212 00:08:06.977176 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-fhrmw" podStartSLOduration=97.977155758 podStartE2EDuration="1m37.977155758s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:06.975755803 +0000 UTC m=+121.753556626" watchObservedRunningTime="2025-12-12 00:08:06.977155758 +0000 UTC m=+121.754956571" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.037991 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-lss4q" podStartSLOduration=98.037957328 podStartE2EDuration="1m38.037957328s" podCreationTimestamp="2025-12-12 
00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:07.036134304 +0000 UTC m=+121.813935117" watchObservedRunningTime="2025-12-12 00:08:07.037957328 +0000 UTC m=+121.815758141" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.081662 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.081997 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.581985896 +0000 UTC m=+122.359786709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.183632 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.183870 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.683833602 +0000 UTC m=+122.461634425 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.184004 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.184417 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.684399785 +0000 UTC m=+122.462200598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.285147 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.285738 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.785709848 +0000 UTC m=+122.563510661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.387748 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.388304 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.888289051 +0000 UTC m=+122.666089864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.405152 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f5t8z"] Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.406421 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.408589 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.426442 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5t8z"] Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.489632 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.489838 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.989788949 +0000 UTC m=+122.767589772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.490115 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-catalog-content\") pod \"certified-operators-f5t8z\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.490212 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.490246 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7qzp\" (UniqueName: \"kubernetes.io/projected/f59fe677-4717-4fd4-8491-6f9d68ab5a54-kube-api-access-j7qzp\") pod \"certified-operators-f5t8z\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.490291 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-utilities\") pod \"certified-operators-f5t8z\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.490698 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:07.99068115 +0000 UTC m=+122.768481953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.556143 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 00:08:07 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld Dec 12 00:08:07 crc kubenswrapper[4917]: [+]process-running ok Dec 12 00:08:07 crc kubenswrapper[4917]: healthz check failed Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.556220 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.592013 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.592377 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7qzp\" (UniqueName: \"kubernetes.io/projected/f59fe677-4717-4fd4-8491-6f9d68ab5a54-kube-api-access-j7qzp\") pod \"certified-operators-f5t8z\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.592429 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-utilities\") pod \"certified-operators-f5t8z\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.592494 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-catalog-content\") pod \"certified-operators-f5t8z\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.592692 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:08.092672739 +0000 UTC m=+122.870473552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.593148 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-catalog-content\") pod \"certified-operators-f5t8z\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.593348 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-utilities\") pod \"certified-operators-f5t8z\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.625274 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r7vjh"] Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.626485 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.644688 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.649846 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r7vjh"] Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.657992 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7qzp\" (UniqueName: \"kubernetes.io/projected/f59fe677-4717-4fd4-8491-6f9d68ab5a54-kube-api-access-j7qzp\") pod \"certified-operators-f5t8z\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.731036 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-utilities\") pod \"community-operators-r7vjh\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.731115 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-catalog-content\") pod \"community-operators-r7vjh\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.731256 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.731306 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk68v\" (UniqueName: \"kubernetes.io/projected/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-kube-api-access-rk68v\") pod \"community-operators-r7vjh\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.731321 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.733837 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:08.233818597 +0000 UTC m=+123.011619400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.749273 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fqm67" event={"ID":"6e0d83f2-3607-4608-b778-92e9cc6eb572","Type":"ContainerStarted","Data":"839ec51e9cbaa1694f623ec0b255e92e3a782d7752d572aea20445a203a90b57"} Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.749341 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fqm67" event={"ID":"6e0d83f2-3607-4608-b778-92e9cc6eb572","Type":"ContainerStarted","Data":"99a02f83d8a7a50ec1f6d09671435dfd2615403ef750b2ec4fd7a0e22f1f1702"} Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.751358 4917 patch_prober.go:28] interesting pod/downloads-7954f5f757-jctrq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.751427 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jctrq" podUID="f3af21ad-4bfb-4640-9589-c46313fb2379" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.783331 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-6m2j6" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.786756 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c84zc" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.831978 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.832145 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:08.332132916 +0000 UTC m=+123.109933729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.832554 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.832618 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk68v\" (UniqueName: \"kubernetes.io/projected/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-kube-api-access-rk68v\") pod \"community-operators-r7vjh\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.832795 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-utilities\") pod \"community-operators-r7vjh\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.832827 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-catalog-content\") pod \"community-operators-r7vjh\" (UID: 
\"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.833333 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-catalog-content\") pod \"community-operators-r7vjh\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.833628 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:08.333617242 +0000 UTC m=+123.111418055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.835843 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-utilities\") pod \"community-operators-r7vjh\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.839696 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mpfr6"] Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.841012 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.861155 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpfr6"] Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.888770 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk68v\" (UniqueName: \"kubernetes.io/projected/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-kube-api-access-rk68v\") pod \"community-operators-r7vjh\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.942375 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:07 crc kubenswrapper[4917]: E1212 00:08:07.944908 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:08.444865459 +0000 UTC m=+123.222666582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:07 crc kubenswrapper[4917]: I1212 00:08:07.945480 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.015121 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h8pzh"] Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.021487 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.041836 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8pzh"] Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.046558 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddgd\" (UniqueName: \"kubernetes.io/projected/cbd23e21-81a0-4569-8eae-32be89672db5-kube-api-access-mddgd\") pod \"certified-operators-mpfr6\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.046622 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-utilities\") pod \"certified-operators-mpfr6\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " 
pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.046700 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.046744 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-catalog-content\") pod \"certified-operators-mpfr6\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:08 crc kubenswrapper[4917]: E1212 00:08:08.047311 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:08.547292618 +0000 UTC m=+123.325093431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.151178 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.151425 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-catalog-content\") pod \"certified-operators-mpfr6\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.151452 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-catalog-content\") pod \"community-operators-h8pzh\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.151509 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5mhx\" (UniqueName: \"kubernetes.io/projected/c5e1046c-65b8-41b2-8bec-d7af367add71-kube-api-access-g5mhx\") pod \"community-operators-h8pzh\" (UID: 
\"c5e1046c-65b8-41b2-8bec-d7af367add71\") " pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.151536 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddgd\" (UniqueName: \"kubernetes.io/projected/cbd23e21-81a0-4569-8eae-32be89672db5-kube-api-access-mddgd\") pod \"certified-operators-mpfr6\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.151583 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-utilities\") pod \"community-operators-h8pzh\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.151609 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-utilities\") pod \"certified-operators-mpfr6\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:08 crc kubenswrapper[4917]: E1212 00:08:08.152423 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:08.652374613 +0000 UTC m=+123.430175426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.152440 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-utilities\") pod \"certified-operators-mpfr6\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.152721 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-catalog-content\") pod \"certified-operators-mpfr6\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.203596 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddgd\" (UniqueName: \"kubernetes.io/projected/cbd23e21-81a0-4569-8eae-32be89672db5-kube-api-access-mddgd\") pod \"certified-operators-mpfr6\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.254996 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-catalog-content\") pod \"community-operators-h8pzh\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " 
pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.255093 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5mhx\" (UniqueName: \"kubernetes.io/projected/c5e1046c-65b8-41b2-8bec-d7af367add71-kube-api-access-g5mhx\") pod \"community-operators-h8pzh\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.255159 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-utilities\") pod \"community-operators-h8pzh\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.255220 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:08 crc kubenswrapper[4917]: E1212 00:08:08.255573 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:08.75555776 +0000 UTC m=+123.533358573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.256133 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-catalog-content\") pod \"community-operators-h8pzh\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.256666 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-utilities\") pod \"community-operators-h8pzh\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.294867 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5mhx\" (UniqueName: \"kubernetes.io/projected/c5e1046c-65b8-41b2-8bec-d7af367add71-kube-api-access-g5mhx\") pod \"community-operators-h8pzh\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.356943 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:08 crc kubenswrapper[4917]: E1212 00:08:08.357284 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:08.857266832 +0000 UTC m=+123.635067645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.362134 4917 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.394661 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.456234 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5t8z"] Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.461682 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:08:08 crc kubenswrapper[4917]: E1212 00:08:08.462051 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:08.96203703 +0000 UTC m=+123.739837853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.501896 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.557320 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 00:08:08 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld Dec 12 00:08:08 crc kubenswrapper[4917]: [+]process-running ok Dec 12 00:08:08 crc kubenswrapper[4917]: healthz check failed Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.557386 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.564719 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 00:08:08 crc kubenswrapper[4917]: E1212 00:08:08.565306 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 00:08:09.065281319 +0000 UTC m=+123.843082132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.667810 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:08 crc kubenswrapper[4917]: E1212 00:08:08.668968 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 00:08:09.168951699 +0000 UTC m=+123.946752512 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-th48g" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.713480 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8pzh"]
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.753504 4917 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-12T00:08:08.362158783Z","Handler":null,"Name":""}
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.755689 4917 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.755733 4917 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.770066 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fqm67" event={"ID":"6e0d83f2-3607-4608-b778-92e9cc6eb572","Type":"ContainerStarted","Data":"a888b5fc457798ad08d20a3fc0222993c79e0e401148aa78a25ea87b81de19b2"}
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.770338 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.784625 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8pzh" event={"ID":"c5e1046c-65b8-41b2-8bec-d7af367add71","Type":"ContainerStarted","Data":"fd7efc75ac3ea5eeaaac0e86c26cea5c147c5614831b19ff6d8ebf31f20b5ed3"}
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.787768 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5t8z" event={"ID":"f59fe677-4717-4fd4-8491-6f9d68ab5a54","Type":"ContainerStarted","Data":"8c224837e69673c38960cc4e064afa1c4d943f19712dced0e4850734ebf6ef2d"}
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.787792 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5t8z" event={"ID":"f59fe677-4717-4fd4-8491-6f9d68ab5a54","Type":"ContainerStarted","Data":"82624f4c0391c209bf0cae0709dadf658400fceeb60031aa4e7315bef37b0adb"}
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.788680 4917 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.804447 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r7vjh"]
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.808072 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-fqm67" podStartSLOduration=11.808040107 podStartE2EDuration="11.808040107s" podCreationTimestamp="2025-12-12 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:08.806315265 +0000 UTC m=+123.584116088" watchObservedRunningTime="2025-12-12 00:08:08.808040107 +0000 UTC m=+123.585840920"
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.825471 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 12 00:08:08 crc kubenswrapper[4917]: W1212 00:08:08.841691 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd23e21_81a0_4569_8eae_32be89672db5.slice/crio-0e7a15eabb6d2957567d76fbd97f8df0ef186aaa810f0884a206dff56964c216 WatchSource:0}: Error finding container 0e7a15eabb6d2957567d76fbd97f8df0ef186aaa810f0884a206dff56964c216: Status 404 returned error can't find the container with id 0e7a15eabb6d2957567d76fbd97f8df0ef186aaa810f0884a206dff56964c216
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.864093 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpfr6"]
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.878189 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.908917 4917 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.909001 4917 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:08 crc kubenswrapper[4917]: I1212 00:08:08.983798 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-th48g\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.111271 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.119899 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.168001 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8frb7"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.168420 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8frb7"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.183112 4917 patch_prober.go:28] interesting pod/console-f9d7485db-8frb7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.183228 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8frb7" podUID="234c3156-bf4c-464d-8ee4-957474f3bb82" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.209415 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-lss4q"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.209488 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-lss4q"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.217769 4917 patch_prober.go:28] interesting pod/apiserver-76f77b778f-lss4q container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]log ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]etcd ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/max-in-flight-filter ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/openshift.io-startinformers ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 12 00:08:09 crc kubenswrapper[4917]: livez check failed
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.217840 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-lss4q" podUID="a8b3bec2-0c51-46d8-9b79-22f802b58962" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.412238 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k7ml9"]
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.414059 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.422116 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.446508 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7ml9"]
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.490742 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-utilities\") pod \"redhat-marketplace-k7ml9\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.490783 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-catalog-content\") pod \"redhat-marketplace-k7ml9\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.490843 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7z7f\" (UniqueName: \"kubernetes.io/projected/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-kube-api-access-t7z7f\") pod \"redhat-marketplace-k7ml9\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.525280 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-th48g"]
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.540989 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:09 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:09 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:09 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.541065 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.585158 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.585234 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.591718 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7z7f\" (UniqueName: \"kubernetes.io/projected/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-kube-api-access-t7z7f\") pod \"redhat-marketplace-k7ml9\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.591928 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-utilities\") pod \"redhat-marketplace-k7ml9\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.592053 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-catalog-content\") pod \"redhat-marketplace-k7ml9\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.592526 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-utilities\") pod \"redhat-marketplace-k7ml9\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.592726 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-catalog-content\") pod \"redhat-marketplace-k7ml9\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.608734 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.611191 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.616330 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7z7f\" (UniqueName: \"kubernetes.io/projected/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-kube-api-access-t7z7f\") pod \"redhat-marketplace-k7ml9\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.735376 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7ml9"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.736432 4917 patch_prober.go:28] interesting pod/downloads-7954f5f757-jctrq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.736461 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jctrq" podUID="f3af21ad-4bfb-4640-9589-c46313fb2379" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.736918 4917 patch_prober.go:28] interesting pod/downloads-7954f5f757-jctrq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.736943 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jctrq" podUID="f3af21ad-4bfb-4640-9589-c46313fb2379" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.738388 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.739281 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.740754 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.741598 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.757975 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.794975 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qqcm8"]
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.795623 4917 generic.go:334] "Generic (PLEG): container finished" podID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerID="8c224837e69673c38960cc4e064afa1c4d943f19712dced0e4850734ebf6ef2d" exitCode=0
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.795992 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5t8z" event={"ID":"f59fe677-4717-4fd4-8491-6f9d68ab5a54","Type":"ContainerDied","Data":"8c224837e69673c38960cc4e064afa1c4d943f19712dced0e4850734ebf6ef2d"}
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.796108 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.806897 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqcm8"]
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.810947 4917 generic.go:334] "Generic (PLEG): container finished" podID="5c4ee0d0-175d-436c-9161-2822246aacec" containerID="6fee1af40784151c459d53bc5c9c41f32cbb0868b36b60de3fb433bc2272b9d3" exitCode=0
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.811119 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" event={"ID":"5c4ee0d0-175d-436c-9161-2822246aacec","Type":"ContainerDied","Data":"6fee1af40784151c459d53bc5c9c41f32cbb0868b36b60de3fb433bc2272b9d3"}
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.813340 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfr6" event={"ID":"cbd23e21-81a0-4569-8eae-32be89672db5","Type":"ContainerStarted","Data":"0e7a15eabb6d2957567d76fbd97f8df0ef186aaa810f0884a206dff56964c216"}
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.817804 4917 generic.go:334] "Generic (PLEG): container finished" podID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerID="495e2dddc48b1d49fd4cd6479a1410576b00e7777f98542b3daa7f5f64f4cea3" exitCode=0
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.817946 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8pzh" event={"ID":"c5e1046c-65b8-41b2-8bec-d7af367add71","Type":"ContainerDied","Data":"495e2dddc48b1d49fd4cd6479a1410576b00e7777f98542b3daa7f5f64f4cea3"}
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.831059 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-th48g" event={"ID":"ac4662d0-8501-4627-81b8-fdfffff90309","Type":"ContainerStarted","Data":"59e19c310f1c0b78bf7fc31a08b3441ca5b533e3eca6b256017962fcbeb9d7a1"}
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.841277 4917 generic.go:334] "Generic (PLEG): container finished" podID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerID="ac9b679c8dd00a9e297fa3d4300f6ec851ab7f7f648b0f41f1467b4a3c238661" exitCode=0
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.841833 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7vjh" event={"ID":"040517b1-b5e4-46e0-90c9-4fb4a7e5726f","Type":"ContainerDied","Data":"ac9b679c8dd00a9e297fa3d4300f6ec851ab7f7f648b0f41f1467b4a3c238661"}
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.841923 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7vjh" event={"ID":"040517b1-b5e4-46e0-90c9-4fb4a7e5726f","Type":"ContainerStarted","Data":"dbb1e5692a9d30d975bd93b10e043b0ad54b7962426114514ad18898bfe0a7a0"}
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.851289 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cn77"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.895800 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4083b8b0-c09b-4b0c-86a8-7f11f98f0112\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.895907 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-catalog-content\") pod \"redhat-marketplace-qqcm8\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.895935 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g628c\" (UniqueName: \"kubernetes.io/projected/97d0ec19-7dd2-4401-86bd-1e3e6074801c-kube-api-access-g628c\") pod \"redhat-marketplace-qqcm8\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.895974 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4083b8b0-c09b-4b0c-86a8-7f11f98f0112\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.896040 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-utilities\") pod \"redhat-marketplace-qqcm8\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.997774 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4083b8b0-c09b-4b0c-86a8-7f11f98f0112\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.997921 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-catalog-content\") pod \"redhat-marketplace-qqcm8\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.997930 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4083b8b0-c09b-4b0c-86a8-7f11f98f0112\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.997960 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g628c\" (UniqueName: \"kubernetes.io/projected/97d0ec19-7dd2-4401-86bd-1e3e6074801c-kube-api-access-g628c\") pod \"redhat-marketplace-qqcm8\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.998424 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4083b8b0-c09b-4b0c-86a8-7f11f98f0112\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.998607 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-utilities\") pod \"redhat-marketplace-qqcm8\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:09 crc kubenswrapper[4917]: I1212 00:08:09.999881 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-catalog-content\") pod \"redhat-marketplace-qqcm8\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.000145 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-utilities\") pod \"redhat-marketplace-qqcm8\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.027814 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4083b8b0-c09b-4b0c-86a8-7f11f98f0112\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.057672 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.066463 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7ml9"]
Dec 12 00:08:10 crc kubenswrapper[4917]: W1212 00:08:10.078360 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1fe8325_d2d0_4418_8c57_cdb509c32ce6.slice/crio-2c082597ef4c3f8654fc8518a66c197b57345e1f9dd0f6548d019b04c0e065fb WatchSource:0}: Error finding container 2c082597ef4c3f8654fc8518a66c197b57345e1f9dd0f6548d019b04c0e065fb: Status 404 returned error can't find the container with id 2c082597ef4c3f8654fc8518a66c197b57345e1f9dd0f6548d019b04c0e065fb
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.153482 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g628c\" (UniqueName: \"kubernetes.io/projected/97d0ec19-7dd2-4401-86bd-1e3e6074801c-kube-api-access-g628c\") pod \"redhat-marketplace-qqcm8\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.305012 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.430375 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqcm8"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.536787 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-thf67"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.547947 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:10 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:10 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:10 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.548054 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.683417 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqcm8"]
Dec 12 00:08:10 crc kubenswrapper[4917]: W1212 00:08:10.702919 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97d0ec19_7dd2_4401_86bd_1e3e6074801c.slice/crio-b5015474f9262f701f271443223f6b7798d3f81b4ae32fc20ad6c3363fda430a WatchSource:0}: Error finding container b5015474f9262f701f271443223f6b7798d3f81b4ae32fc20ad6c3363fda430a: Status 404 returned error can't find the container with id b5015474f9262f701f271443223f6b7798d3f81b4ae32fc20ad6c3363fda430a
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.816798 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bnjdr"]
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.818788 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnjdr"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.821479 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.842407 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bnjdr"]
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.861115 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-th48g" event={"ID":"ac4662d0-8501-4627-81b8-fdfffff90309","Type":"ContainerStarted","Data":"690ce27f806975eded134aa8cb8dc25d264ef84fde40707d094da7187fbbd0a3"}
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.861765 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.877450 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqcm8" event={"ID":"97d0ec19-7dd2-4401-86bd-1e3e6074801c","Type":"ContainerStarted","Data":"b5015474f9262f701f271443223f6b7798d3f81b4ae32fc20ad6c3363fda430a"}
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.902660 4917 generic.go:334] "Generic (PLEG): container finished" podID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerID="923287da3b62bb51eeb2811b37e0287f74fcdbff8e12d0c13405168cae19700d" exitCode=0
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.902814 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7ml9" event={"ID":"c1fe8325-d2d0-4418-8c57-cdb509c32ce6","Type":"ContainerDied","Data":"923287da3b62bb51eeb2811b37e0287f74fcdbff8e12d0c13405168cae19700d"}
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.902856 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7ml9" event={"ID":"c1fe8325-d2d0-4418-8c57-cdb509c32ce6","Type":"ContainerStarted","Data":"2c082597ef4c3f8654fc8518a66c197b57345e1f9dd0f6548d019b04c0e065fb"}
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.922042 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-th48g" podStartSLOduration=101.922021062 podStartE2EDuration="1m41.922021062s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:08:10.918976307 +0000 UTC m=+125.696777130" watchObservedRunningTime="2025-12-12 00:08:10.922021062 +0000 UTC m=+125.699821875"
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.933588 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4083b8b0-c09b-4b0c-86a8-7f11f98f0112","Type":"ContainerStarted","Data":"9f75d7d9dd251225ddcecacfbd422419f21c1c45710997ac312419ab6f29da35"}
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.937556 4917 generic.go:334] "Generic (PLEG): container finished" podID="cbd23e21-81a0-4569-8eae-32be89672db5" containerID="895b97b261e58ac606716f6c6bba98bbd8c41ad55d9d76a2806c6625c2b71f29" exitCode=0
Dec 12 00:08:10 crc kubenswrapper[4917]: I1212 00:08:10.939086 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfr6" event={"ID":"cbd23e21-81a0-4569-8eae-32be89672db5","Type":"ContainerDied","Data":"895b97b261e58ac606716f6c6bba98bbd8c41ad55d9d76a2806c6625c2b71f29"}
Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.017691 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-catalog-content\") pod \"redhat-operators-bnjdr\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " pod="openshift-marketplace/redhat-operators-bnjdr"
Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.018330 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcc29\" (UniqueName: \"kubernetes.io/projected/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-kube-api-access-mcc29\") pod \"redhat-operators-bnjdr\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " pod="openshift-marketplace/redhat-operators-bnjdr"
Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.018479 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-utilities\") pod \"redhat-operators-bnjdr\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " pod="openshift-marketplace/redhat-operators-bnjdr"
Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.120292 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-catalog-content\") pod \"redhat-operators-bnjdr\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") "
pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.120435 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcc29\" (UniqueName: \"kubernetes.io/projected/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-kube-api-access-mcc29\") pod \"redhat-operators-bnjdr\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.120484 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-utilities\") pod \"redhat-operators-bnjdr\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.121091 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-catalog-content\") pod \"redhat-operators-bnjdr\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.122083 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-utilities\") pod \"redhat-operators-bnjdr\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.146407 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcc29\" (UniqueName: \"kubernetes.io/projected/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-kube-api-access-mcc29\") pod \"redhat-operators-bnjdr\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:08:11 
crc kubenswrapper[4917]: I1212 00:08:11.216242 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f2zgs"] Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.218266 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.255589 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f2zgs"] Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.324484 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-utilities\") pod \"redhat-operators-f2zgs\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.325693 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzcqs\" (UniqueName: \"kubernetes.io/projected/0e89623b-b8de-4d14-87bb-363bcbc0f859-kube-api-access-pzcqs\") pod \"redhat-operators-f2zgs\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.325850 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-catalog-content\") pod \"redhat-operators-f2zgs\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.336632 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.428040 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4ee0d0-175d-436c-9161-2822246aacec-config-volume\") pod \"5c4ee0d0-175d-436c-9161-2822246aacec\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.428175 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c4ee0d0-175d-436c-9161-2822246aacec-secret-volume\") pod \"5c4ee0d0-175d-436c-9161-2822246aacec\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.428258 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67vbc\" (UniqueName: \"kubernetes.io/projected/5c4ee0d0-175d-436c-9161-2822246aacec-kube-api-access-67vbc\") pod \"5c4ee0d0-175d-436c-9161-2822246aacec\" (UID: \"5c4ee0d0-175d-436c-9161-2822246aacec\") " Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.429116 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-utilities\") pod \"redhat-operators-f2zgs\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.429514 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzcqs\" (UniqueName: \"kubernetes.io/projected/0e89623b-b8de-4d14-87bb-363bcbc0f859-kube-api-access-pzcqs\") pod \"redhat-operators-f2zgs\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 
00:08:11.429512 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c4ee0d0-175d-436c-9161-2822246aacec-config-volume" (OuterVolumeSpecName: "config-volume") pod "5c4ee0d0-175d-436c-9161-2822246aacec" (UID: "5c4ee0d0-175d-436c-9161-2822246aacec"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.429546 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-catalog-content\") pod \"redhat-operators-f2zgs\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.430305 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-utilities\") pod \"redhat-operators-f2zgs\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.431097 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-catalog-content\") pod \"redhat-operators-f2zgs\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.431337 4917 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4ee0d0-175d-436c-9161-2822246aacec-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.436433 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.448068 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzcqs\" (UniqueName: \"kubernetes.io/projected/0e89623b-b8de-4d14-87bb-363bcbc0f859-kube-api-access-pzcqs\") pod \"redhat-operators-f2zgs\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.448970 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4ee0d0-175d-436c-9161-2822246aacec-kube-api-access-67vbc" (OuterVolumeSpecName: "kube-api-access-67vbc") pod "5c4ee0d0-175d-436c-9161-2822246aacec" (UID: "5c4ee0d0-175d-436c-9161-2822246aacec"). InnerVolumeSpecName "kube-api-access-67vbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.449066 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4ee0d0-175d-436c-9161-2822246aacec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5c4ee0d0-175d-436c-9161-2822246aacec" (UID: "5c4ee0d0-175d-436c-9161-2822246aacec"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.511156 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pzbrh" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.533053 4917 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c4ee0d0-175d-436c-9161-2822246aacec-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.533441 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67vbc\" (UniqueName: \"kubernetes.io/projected/5c4ee0d0-175d-436c-9161-2822246aacec-kube-api-access-67vbc\") on node \"crc\" DevicePath \"\"" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.551305 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 00:08:11 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld Dec 12 00:08:11 crc kubenswrapper[4917]: [+]process-running ok Dec 12 00:08:11 crc kubenswrapper[4917]: healthz check failed Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.551388 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.554234 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:08:11 crc kubenswrapper[4917]: I1212 00:08:11.859240 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bnjdr"] Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.033238 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqcm8" event={"ID":"97d0ec19-7dd2-4401-86bd-1e3e6074801c","Type":"ContainerDied","Data":"c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b"} Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.033049 4917 generic.go:334] "Generic (PLEG): container finished" podID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerID="c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b" exitCode=0 Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.038437 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnjdr" event={"ID":"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2","Type":"ContainerStarted","Data":"5f30387d2805cca96b34fc30b974d148a04794fc94eb14dee834560ebbbffc27"} Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.046580 4917 generic.go:334] "Generic (PLEG): container finished" podID="4083b8b0-c09b-4b0c-86a8-7f11f98f0112" containerID="8e97243345d02a84a0eb5158c9e090303f189b8b76936c3ed496330022909865" exitCode=0 Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.046849 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4083b8b0-c09b-4b0c-86a8-7f11f98f0112","Type":"ContainerDied","Data":"8e97243345d02a84a0eb5158c9e090303f189b8b76936c3ed496330022909865"} Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.066357 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.067326 4917 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" event={"ID":"5c4ee0d0-175d-436c-9161-2822246aacec","Type":"ContainerDied","Data":"51ef050a493501d03d16680ae0435cab851eec2e45109f17376179aa7ffff7ae"} Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.067478 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ef050a493501d03d16680ae0435cab851eec2e45109f17376179aa7ffff7ae" Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.067665 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr" Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.092447 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f2zgs"] Dec 12 00:08:12 crc kubenswrapper[4917]: W1212 00:08:12.128492 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e89623b_b8de_4d14_87bb_363bcbc0f859.slice/crio-7ce044c2870115babf80cac225887ccbc5f8d542db7591f6228ac6c45c3a4c10 WatchSource:0}: Error finding container 7ce044c2870115babf80cac225887ccbc5f8d542db7591f6228ac6c45c3a4c10: Status 404 returned error can't find the container with id 7ce044c2870115babf80cac225887ccbc5f8d542db7591f6228ac6c45c3a4c10 Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.541998 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 00:08:12 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld Dec 12 00:08:12 crc kubenswrapper[4917]: [+]process-running ok Dec 12 00:08:12 crc kubenswrapper[4917]: healthz check failed Dec 12 00:08:12 crc kubenswrapper[4917]: I1212 00:08:12.542382 4917 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.083781 4917 generic.go:334] "Generic (PLEG): container finished" podID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerID="8563ff0f2d4502978d6940b81821d6181991a63cd0bd318c2a35cec2e6cf5e19" exitCode=0 Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.083839 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2zgs" event={"ID":"0e89623b-b8de-4d14-87bb-363bcbc0f859","Type":"ContainerDied","Data":"8563ff0f2d4502978d6940b81821d6181991a63cd0bd318c2a35cec2e6cf5e19"} Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.083903 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2zgs" event={"ID":"0e89623b-b8de-4d14-87bb-363bcbc0f859","Type":"ContainerStarted","Data":"7ce044c2870115babf80cac225887ccbc5f8d542db7591f6228ac6c45c3a4c10"} Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.091959 4917 generic.go:334] "Generic (PLEG): container finished" podID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerID="3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16" exitCode=0 Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.092099 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnjdr" event={"ID":"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2","Type":"ContainerDied","Data":"3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16"} Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.440209 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.540665 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 00:08:13 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld Dec 12 00:08:13 crc kubenswrapper[4917]: [+]process-running ok Dec 12 00:08:13 crc kubenswrapper[4917]: healthz check failed Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.540755 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.591142 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kube-api-access\") pod \"4083b8b0-c09b-4b0c-86a8-7f11f98f0112\" (UID: \"4083b8b0-c09b-4b0c-86a8-7f11f98f0112\") " Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.591458 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kubelet-dir\") pod \"4083b8b0-c09b-4b0c-86a8-7f11f98f0112\" (UID: \"4083b8b0-c09b-4b0c-86a8-7f11f98f0112\") " Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.591544 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4083b8b0-c09b-4b0c-86a8-7f11f98f0112" (UID: "4083b8b0-c09b-4b0c-86a8-7f11f98f0112"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.592003 4917 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.612862 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4083b8b0-c09b-4b0c-86a8-7f11f98f0112" (UID: "4083b8b0-c09b-4b0c-86a8-7f11f98f0112"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.693290 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4083b8b0-c09b-4b0c-86a8-7f11f98f0112-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.715930 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 12 00:08:13 crc kubenswrapper[4917]: E1212 00:08:13.716494 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4083b8b0-c09b-4b0c-86a8-7f11f98f0112" containerName="pruner" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.720907 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="4083b8b0-c09b-4b0c-86a8-7f11f98f0112" containerName="pruner" Dec 12 00:08:13 crc kubenswrapper[4917]: E1212 00:08:13.721151 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4ee0d0-175d-436c-9161-2822246aacec" containerName="collect-profiles" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.721228 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4ee0d0-175d-436c-9161-2822246aacec" containerName="collect-profiles" Dec 12 00:08:13 crc 
kubenswrapper[4917]: I1212 00:08:13.721592 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4ee0d0-175d-436c-9161-2822246aacec" containerName="collect-profiles" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.721695 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="4083b8b0-c09b-4b0c-86a8-7f11f98f0112" containerName="pruner" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.722203 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.733587 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.742831 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.743578 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.895466 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1807fdde-9c39-45ca-acc0-88c7081a265f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1807fdde-9c39-45ca-acc0-88c7081a265f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.895556 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1807fdde-9c39-45ca-acc0-88c7081a265f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1807fdde-9c39-45ca-acc0-88c7081a265f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.997723 4917 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1807fdde-9c39-45ca-acc0-88c7081a265f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1807fdde-9c39-45ca-acc0-88c7081a265f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.997846 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1807fdde-9c39-45ca-acc0-88c7081a265f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1807fdde-9c39-45ca-acc0-88c7081a265f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 00:08:13 crc kubenswrapper[4917]: I1212 00:08:13.998035 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1807fdde-9c39-45ca-acc0-88c7081a265f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1807fdde-9c39-45ca-acc0-88c7081a265f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 00:08:14 crc kubenswrapper[4917]: I1212 00:08:14.032460 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1807fdde-9c39-45ca-acc0-88c7081a265f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1807fdde-9c39-45ca-acc0-88c7081a265f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 00:08:14 crc kubenswrapper[4917]: I1212 00:08:14.068438 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 12 00:08:14 crc kubenswrapper[4917]: I1212 00:08:14.123585 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4083b8b0-c09b-4b0c-86a8-7f11f98f0112","Type":"ContainerDied","Data":"9f75d7d9dd251225ddcecacfbd422419f21c1c45710997ac312419ab6f29da35"}
Dec 12 00:08:14 crc kubenswrapper[4917]: I1212 00:08:14.123665 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f75d7d9dd251225ddcecacfbd422419f21c1c45710997ac312419ab6f29da35"
Dec 12 00:08:14 crc kubenswrapper[4917]: I1212 00:08:14.123718 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 12 00:08:14 crc kubenswrapper[4917]: I1212 00:08:14.217083 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-lss4q"
Dec 12 00:08:14 crc kubenswrapper[4917]: I1212 00:08:14.225676 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-lss4q"
Dec 12 00:08:14 crc kubenswrapper[4917]: I1212 00:08:14.467555 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 12 00:08:14 crc kubenswrapper[4917]: W1212 00:08:14.489594 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1807fdde_9c39_45ca_acc0_88c7081a265f.slice/crio-733b60af5ee55dc50dd5f6aef95c34abdb2128f30f4acf4412d3ad82dc37d678 WatchSource:0}: Error finding container 733b60af5ee55dc50dd5f6aef95c34abdb2128f30f4acf4412d3ad82dc37d678: Status 404 returned error can't find the container with id 733b60af5ee55dc50dd5f6aef95c34abdb2128f30f4acf4412d3ad82dc37d678
Dec 12 00:08:14 crc kubenswrapper[4917]: I1212 00:08:14.543443 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:14 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:14 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:14 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:14 crc kubenswrapper[4917]: I1212 00:08:14.543506 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:15 crc kubenswrapper[4917]: I1212 00:08:15.139736 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1807fdde-9c39-45ca-acc0-88c7081a265f","Type":"ContainerStarted","Data":"733b60af5ee55dc50dd5f6aef95c34abdb2128f30f4acf4412d3ad82dc37d678"}
Dec 12 00:08:15 crc kubenswrapper[4917]: I1212 00:08:15.539115 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:15 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:15 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:15 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:15 crc kubenswrapper[4917]: I1212 00:08:15.539180 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:16 crc kubenswrapper[4917]: I1212 00:08:16.158345 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1807fdde-9c39-45ca-acc0-88c7081a265f","Type":"ContainerStarted","Data":"9131d52d7fdc11897b7e37fd62c7ca2f87a6680ee176bc0bdf15fba3ab43ed66"}
Dec 12 00:08:16 crc kubenswrapper[4917]: I1212 00:08:16.539899 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:16 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:16 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:16 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:16 crc kubenswrapper[4917]: I1212 00:08:16.540011 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:17 crc kubenswrapper[4917]: I1212 00:08:17.181201 4917 generic.go:334] "Generic (PLEG): container finished" podID="1807fdde-9c39-45ca-acc0-88c7081a265f" containerID="9131d52d7fdc11897b7e37fd62c7ca2f87a6680ee176bc0bdf15fba3ab43ed66" exitCode=0
Dec 12 00:08:17 crc kubenswrapper[4917]: I1212 00:08:17.181394 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1807fdde-9c39-45ca-acc0-88c7081a265f","Type":"ContainerDied","Data":"9131d52d7fdc11897b7e37fd62c7ca2f87a6680ee176bc0bdf15fba3ab43ed66"}
Dec 12 00:08:17 crc kubenswrapper[4917]: I1212 00:08:17.540712 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:17 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:17 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:17 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:17 crc kubenswrapper[4917]: I1212 00:08:17.540775 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:18 crc kubenswrapper[4917]: I1212 00:08:18.539450 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:18 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:18 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:18 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:18 crc kubenswrapper[4917]: I1212 00:08:18.539507 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:19 crc kubenswrapper[4917]: I1212 00:08:19.168397 4917 patch_prober.go:28] interesting pod/console-f9d7485db-8frb7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 12 00:08:19 crc kubenswrapper[4917]: I1212 00:08:19.168464 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8frb7" podUID="234c3156-bf4c-464d-8ee4-957474f3bb82" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 12 00:08:19 crc kubenswrapper[4917]: I1212 00:08:19.539430 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:19 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:19 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:19 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:19 crc kubenswrapper[4917]: I1212 00:08:19.539536 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:19 crc kubenswrapper[4917]: I1212 00:08:19.743932 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jctrq"
Dec 12 00:08:20 crc kubenswrapper[4917]: I1212 00:08:20.538685 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:20 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:20 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:20 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:20 crc kubenswrapper[4917]: I1212 00:08:20.539069 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:21 crc kubenswrapper[4917]: I1212 00:08:21.539915 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:21 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:21 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:21 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:21 crc kubenswrapper[4917]: I1212 00:08:21.540009 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:22 crc kubenswrapper[4917]: I1212 00:08:22.538473 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:22 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:22 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:22 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:22 crc kubenswrapper[4917]: I1212 00:08:22.538538 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:23 crc kubenswrapper[4917]: I1212 00:08:23.539215 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:23 crc kubenswrapper[4917]: [-]has-synced failed: reason withheld
Dec 12 00:08:23 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:23 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:23 crc kubenswrapper[4917]: I1212 00:08:23.539323 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:24 crc kubenswrapper[4917]: I1212 00:08:24.539202 4917 patch_prober.go:28] interesting pod/router-default-5444994796-thf67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 00:08:24 crc kubenswrapper[4917]: [+]has-synced ok
Dec 12 00:08:24 crc kubenswrapper[4917]: [+]process-running ok
Dec 12 00:08:24 crc kubenswrapper[4917]: healthz check failed
Dec 12 00:08:24 crc kubenswrapper[4917]: I1212 00:08:24.539307 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-thf67" podUID="ca21bb76-0d5b-4125-aa3a-a8c9c4a3477f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 00:08:25 crc kubenswrapper[4917]: I1212 00:08:25.540329 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-thf67"
Dec 12 00:08:25 crc kubenswrapper[4917]: I1212 00:08:25.544121 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-thf67"
Dec 12 00:08:26 crc kubenswrapper[4917]: I1212 00:08:26.020118 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 12 00:08:26 crc kubenswrapper[4917]: I1212 00:08:26.085896 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1807fdde-9c39-45ca-acc0-88c7081a265f-kubelet-dir\") pod \"1807fdde-9c39-45ca-acc0-88c7081a265f\" (UID: \"1807fdde-9c39-45ca-acc0-88c7081a265f\") "
Dec 12 00:08:26 crc kubenswrapper[4917]: I1212 00:08:26.085953 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1807fdde-9c39-45ca-acc0-88c7081a265f-kube-api-access\") pod \"1807fdde-9c39-45ca-acc0-88c7081a265f\" (UID: \"1807fdde-9c39-45ca-acc0-88c7081a265f\") "
Dec 12 00:08:26 crc kubenswrapper[4917]: I1212 00:08:26.086033 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1807fdde-9c39-45ca-acc0-88c7081a265f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1807fdde-9c39-45ca-acc0-88c7081a265f" (UID: "1807fdde-9c39-45ca-acc0-88c7081a265f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 12 00:08:26 crc kubenswrapper[4917]: I1212 00:08:26.086372 4917 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1807fdde-9c39-45ca-acc0-88c7081a265f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 12 00:08:26 crc kubenswrapper[4917]: I1212 00:08:26.093059 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1807fdde-9c39-45ca-acc0-88c7081a265f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1807fdde-9c39-45ca-acc0-88c7081a265f" (UID: "1807fdde-9c39-45ca-acc0-88c7081a265f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 00:08:26 crc kubenswrapper[4917]: I1212 00:08:26.188362 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1807fdde-9c39-45ca-acc0-88c7081a265f-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 00:08:26 crc kubenswrapper[4917]: I1212 00:08:26.296228 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1807fdde-9c39-45ca-acc0-88c7081a265f","Type":"ContainerDied","Data":"733b60af5ee55dc50dd5f6aef95c34abdb2128f30f4acf4412d3ad82dc37d678"}
Dec 12 00:08:26 crc kubenswrapper[4917]: I1212 00:08:26.296254 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 12 00:08:26 crc kubenswrapper[4917]: I1212 00:08:26.296269 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="733b60af5ee55dc50dd5f6aef95c34abdb2128f30f4acf4412d3ad82dc37d678"
Dec 12 00:08:29 crc kubenswrapper[4917]: I1212 00:08:29.132947 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-th48g"
Dec 12 00:08:29 crc kubenswrapper[4917]: I1212 00:08:29.321923 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8frb7"
Dec 12 00:08:29 crc kubenswrapper[4917]: I1212 00:08:29.327123 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8frb7"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.500779 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.501195 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.503562 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.504180 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.519808 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.602165 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.602227 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.605203 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.615334 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.627548 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.628082 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.830702 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 12 00:08:31 crc kubenswrapper[4917]: I1212 00:08:31.836693 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 12 00:08:32 crc kubenswrapper[4917]: I1212 00:08:32.139926 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 12 00:08:32 crc kubenswrapper[4917]: I1212 00:08:32.423259 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 12 00:08:39 crc kubenswrapper[4917]: I1212 00:08:39.375483 4917 generic.go:334] "Generic (PLEG): container finished" podID="fe29aecd-7402-47c3-a15a-d5a489c48b29" containerID="e3f6d5e6097b505eb363fe2264106b8e938bde6586131165f84667e31eb15b39" exitCode=0
Dec 12 00:08:39 crc kubenswrapper[4917]: I1212 00:08:39.375594 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29424960-ttvzz" event={"ID":"fe29aecd-7402-47c3-a15a-d5a489c48b29","Type":"ContainerDied","Data":"e3f6d5e6097b505eb363fe2264106b8e938bde6586131165f84667e31eb15b39"}
Dec 12 00:08:40 crc kubenswrapper[4917]: I1212 00:08:40.273893 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9wvw"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.303406 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 12 00:08:45 crc kubenswrapper[4917]: E1212 00:08:45.304170 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1807fdde-9c39-45ca-acc0-88c7081a265f" containerName="pruner"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.304186 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1807fdde-9c39-45ca-acc0-88c7081a265f" containerName="pruner"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.304297 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="1807fdde-9c39-45ca-acc0-88c7081a265f" containerName="pruner"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.304945 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.322242 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.322435 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.326793 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.406215 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73d83d51-dd4e-470a-af94-09ee620fe4da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73d83d51-dd4e-470a-af94-09ee620fe4da\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.406372 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73d83d51-dd4e-470a-af94-09ee620fe4da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73d83d51-dd4e-470a-af94-09ee620fe4da\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.508087 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73d83d51-dd4e-470a-af94-09ee620fe4da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73d83d51-dd4e-470a-af94-09ee620fe4da\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.508200 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73d83d51-dd4e-470a-af94-09ee620fe4da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73d83d51-dd4e-470a-af94-09ee620fe4da\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.508226 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73d83d51-dd4e-470a-af94-09ee620fe4da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73d83d51-dd4e-470a-af94-09ee620fe4da\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.528935 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73d83d51-dd4e-470a-af94-09ee620fe4da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73d83d51-dd4e-470a-af94-09ee620fe4da\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 12 00:08:45 crc kubenswrapper[4917]: I1212 00:08:45.646334 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 12 00:08:47 crc kubenswrapper[4917]: E1212 00:08:47.521919 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 12 00:08:47 crc kubenswrapper[4917]: E1212 00:08:47.522523 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk68v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-r7vjh_openshift-marketplace(040517b1-b5e4-46e0-90c9-4fb4a7e5726f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 12 00:08:47 crc kubenswrapper[4917]: E1212 00:08:47.523713 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-r7vjh" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f"
Dec 12 00:08:50 crc kubenswrapper[4917]: E1212 00:08:50.203976 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r7vjh" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f"
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.698624 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.700282 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.710351 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.787107 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kube-api-access\") pod \"installer-9-crc\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.787815 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-var-lock\") pod \"installer-9-crc\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.787864 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.889444 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kube-api-access\") pod \"installer-9-crc\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.889544 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-var-lock\") pod \"installer-9-crc\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.889587 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.889721 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.890271 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-var-lock\") pod \"installer-9-crc\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:50 crc kubenswrapper[4917]: I1212 00:08:50.913461 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kube-api-access\") pod \"installer-9-crc\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:51 crc kubenswrapper[4917]: I1212 00:08:51.047727 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 12 00:08:52 crc kubenswrapper[4917]: I1212 00:08:52.005865 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96"
Dec 12 00:08:52 crc kubenswrapper[4917]: I1212 00:08:52.008210 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 12 00:08:52 crc kubenswrapper[4917]: I1212 00:08:52.019244 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f4853b-9736-4a03-8c86-1627cb51acbe-metrics-certs\") pod \"network-metrics-daemon-f4t96\" (UID: \"58f4853b-9736-4a03-8c86-1627cb51acbe\") " pod="openshift-multus/network-metrics-daemon-f4t96"
Dec 12 00:08:52 crc kubenswrapper[4917]: I1212 00:08:52.318374 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 12 00:08:52 crc kubenswrapper[4917]: I1212 00:08:52.326564 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f4t96"
Dec 12 00:08:55 crc kubenswrapper[4917]: E1212 00:08:55.266938 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 12 00:08:55 crc kubenswrapper[4917]: E1212 00:08:55.267920 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5mhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h8pzh_openshift-marketplace(c5e1046c-65b8-41b2-8bec-d7af367add71): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 12 00:08:55 crc kubenswrapper[4917]: E1212 00:08:55.269761 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h8pzh" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71"
Dec 12 00:08:59 crc kubenswrapper[4917]: I1212 00:08:59.639284 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 00:08:59 crc kubenswrapper[4917]: I1212 00:08:59.639338 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 00:09:01 crc kubenswrapper[4917]: E1212 00:09:01.873215 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 12 00:09:01 crc kubenswrapper[4917]: E1212 00:09:01.873711 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcc29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bnjdr_openshift-marketplace(f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 12 00:09:01 crc kubenswrapper[4917]: E1212 00:09:01.875021 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bnjdr" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" Dec 12 00:09:05 crc kubenswrapper[4917]: E1212 00:09:05.789090 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bnjdr" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" Dec 12 00:09:10 crc kubenswrapper[4917]: E1212 00:09:10.896788 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 12 00:09:10 crc kubenswrapper[4917]: E1212 00:09:10.897300 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzcqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f2zgs_openshift-marketplace(0e89623b-b8de-4d14-87bb-363bcbc0f859): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 00:09:10 crc kubenswrapper[4917]: E1212 00:09:10.898568 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-f2zgs" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" Dec 12 00:09:17 crc 
kubenswrapper[4917]: E1212 00:09:17.945488 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 12 00:09:17 crc kubenswrapper[4917]: E1212 00:09:17.946158 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g628c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-qqcm8_openshift-marketplace(97d0ec19-7dd2-4401-86bd-1e3e6074801c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 00:09:17 crc kubenswrapper[4917]: E1212 00:09:17.947400 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qqcm8" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" Dec 12 00:09:21 crc kubenswrapper[4917]: E1212 00:09:21.529255 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qqcm8" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" Dec 12 00:09:21 crc kubenswrapper[4917]: I1212 00:09:21.637466 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29424960-ttvzz" Dec 12 00:09:21 crc kubenswrapper[4917]: I1212 00:09:21.641756 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29424960-ttvzz" event={"ID":"fe29aecd-7402-47c3-a15a-d5a489c48b29","Type":"ContainerDied","Data":"f89a9a156b06ae5d5c4acaf2390d3f4269e41aecb7d95526ecdd067cf13729f5"} Dec 12 00:09:21 crc kubenswrapper[4917]: I1212 00:09:21.641802 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f89a9a156b06ae5d5c4acaf2390d3f4269e41aecb7d95526ecdd067cf13729f5" Dec 12 00:09:21 crc kubenswrapper[4917]: I1212 00:09:21.814122 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe29aecd-7402-47c3-a15a-d5a489c48b29-serviceca\") pod \"fe29aecd-7402-47c3-a15a-d5a489c48b29\" (UID: \"fe29aecd-7402-47c3-a15a-d5a489c48b29\") " Dec 12 00:09:21 crc kubenswrapper[4917]: I1212 00:09:21.814544 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-857sk\" (UniqueName: \"kubernetes.io/projected/fe29aecd-7402-47c3-a15a-d5a489c48b29-kube-api-access-857sk\") pod \"fe29aecd-7402-47c3-a15a-d5a489c48b29\" (UID: \"fe29aecd-7402-47c3-a15a-d5a489c48b29\") " Dec 12 00:09:21 crc kubenswrapper[4917]: I1212 00:09:21.814983 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe29aecd-7402-47c3-a15a-d5a489c48b29-serviceca" (OuterVolumeSpecName: "serviceca") pod "fe29aecd-7402-47c3-a15a-d5a489c48b29" (UID: "fe29aecd-7402-47c3-a15a-d5a489c48b29"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:09:21 crc kubenswrapper[4917]: I1212 00:09:21.819734 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe29aecd-7402-47c3-a15a-d5a489c48b29-kube-api-access-857sk" (OuterVolumeSpecName: "kube-api-access-857sk") pod "fe29aecd-7402-47c3-a15a-d5a489c48b29" (UID: "fe29aecd-7402-47c3-a15a-d5a489c48b29"). InnerVolumeSpecName "kube-api-access-857sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:09:21 crc kubenswrapper[4917]: E1212 00:09:21.831968 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 12 00:09:21 crc kubenswrapper[4917]: E1212 00:09:21.832108 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7qzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-f5t8z_openshift-marketplace(f59fe677-4717-4fd4-8491-6f9d68ab5a54): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 00:09:21 crc kubenswrapper[4917]: E1212 00:09:21.833331 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-f5t8z" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" Dec 12 00:09:21 crc 
kubenswrapper[4917]: I1212 00:09:21.916181 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-857sk\" (UniqueName: \"kubernetes.io/projected/fe29aecd-7402-47c3-a15a-d5a489c48b29-kube-api-access-857sk\") on node \"crc\" DevicePath \"\"" Dec 12 00:09:21 crc kubenswrapper[4917]: I1212 00:09:21.916219 4917 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe29aecd-7402-47c3-a15a-d5a489c48b29-serviceca\") on node \"crc\" DevicePath \"\"" Dec 12 00:09:21 crc kubenswrapper[4917]: E1212 00:09:21.974954 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 12 00:09:21 crc kubenswrapper[4917]: E1212 00:09:21.975133 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7z7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-k7ml9_openshift-marketplace(c1fe8325-d2d0-4418-8c57-cdb509c32ce6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 00:09:21 crc kubenswrapper[4917]: E1212 00:09:21.976313 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-k7ml9" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" Dec 12 00:09:22 crc 
kubenswrapper[4917]: I1212 00:09:22.646657 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29424960-ttvzz" Dec 12 00:09:22 crc kubenswrapper[4917]: E1212 00:09:22.737155 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 12 00:09:22 crc kubenswrapper[4917]: E1212 00:09:22.737311 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mddgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termin
ationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mpfr6_openshift-marketplace(cbd23e21-81a0-4569-8eae-32be89672db5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 00:09:22 crc kubenswrapper[4917]: E1212 00:09:22.738501 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mpfr6" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" Dec 12 00:09:24 crc kubenswrapper[4917]: E1212 00:09:24.183915 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-k7ml9" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" Dec 12 00:09:24 crc kubenswrapper[4917]: E1212 00:09:24.186941 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-f5t8z" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" Dec 12 00:09:24 crc kubenswrapper[4917]: I1212 00:09:24.352904 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 12 00:09:24 crc kubenswrapper[4917]: I1212 00:09:24.657937 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e9e4354d12197da655e85ec9beafbeb0905bf11bc229a9978f2d46c0ad670bca"} Dec 12 00:09:24 crc kubenswrapper[4917]: I1212 00:09:24.659332 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"836f21fd0003ed23a2d101b33b13cbf720df5a75e0a834d0ce542dc2d27ca115"} Dec 12 00:09:26 crc kubenswrapper[4917]: E1212 00:09:26.229464 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mpfr6" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" Dec 12 00:09:26 crc kubenswrapper[4917]: W1212 00:09:26.239938 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod73d83d51_dd4e_470a_af94_09ee620fe4da.slice/crio-e45608ac0524a523043c1035b636e17734bb011122e3d2c610cfc5c77f19ce7d WatchSource:0}: Error finding container e45608ac0524a523043c1035b636e17734bb011122e3d2c610cfc5c77f19ce7d: Status 404 returned error can't find the container with id e45608ac0524a523043c1035b636e17734bb011122e3d2c610cfc5c77f19ce7d Dec 12 00:09:26 crc kubenswrapper[4917]: I1212 00:09:26.444836 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 12 00:09:26 crc kubenswrapper[4917]: I1212 00:09:26.618126 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-f4t96"] Dec 12 00:09:26 crc kubenswrapper[4917]: I1212 00:09:26.688244 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8b1dedcdb2492aa7b67f6497f6d56119b4b9e4bb21fec9f13b4216dceb66544e"} Dec 12 00:09:26 crc kubenswrapper[4917]: I1212 00:09:26.689392 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"73d83d51-dd4e-470a-af94-09ee620fe4da","Type":"ContainerStarted","Data":"e45608ac0524a523043c1035b636e17734bb011122e3d2c610cfc5c77f19ce7d"} Dec 12 00:09:29 crc kubenswrapper[4917]: I1212 00:09:29.639782 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:09:29 crc kubenswrapper[4917]: I1212 00:09:29.640183 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:09:29 crc kubenswrapper[4917]: I1212 00:09:29.708763 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f4t96" event={"ID":"58f4853b-9736-4a03-8c86-1627cb51acbe","Type":"ContainerStarted","Data":"e93bf806b0af68ea5a27ebc5f6e06e11a82049b1749696a57a846cc62a0c05fd"} Dec 12 00:09:29 crc kubenswrapper[4917]: I1212 00:09:29.710072 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf203f00-76e7-4142-9d10-10e4d4ccf2d4","Type":"ContainerStarted","Data":"b256c440440cc7262ffb2f4cec86d5f058ded63b422f3841293a4f679b517911"} Dec 12 00:09:31 crc kubenswrapper[4917]: I1212 00:09:31.723684 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2666c30cbf6e38233415391aa5b81a8b50bd9a9dba6a014cc02257efc7bf6201"} Dec 12 00:09:31 crc kubenswrapper[4917]: I1212 00:09:31.728474 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"73d83d51-dd4e-470a-af94-09ee620fe4da","Type":"ContainerStarted","Data":"b00e8db58023eb7a3903a461cf5106bab50d864b8ba3a919fb568fc183cb9984"} Dec 12 00:09:31 crc kubenswrapper[4917]: I1212 00:09:31.730113 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f4t96" event={"ID":"58f4853b-9736-4a03-8c86-1627cb51acbe","Type":"ContainerStarted","Data":"c23d27406a0e6887ca1587f8577dbd25f8d34ac59bcd085d3c337a97b65ec132"} Dec 12 00:09:31 crc kubenswrapper[4917]: I1212 00:09:31.731839 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a5d5e842e77758248a58a887daed507486eac35edf9feaa75f3f6d5c6dd8612c"} Dec 12 00:09:31 crc kubenswrapper[4917]: I1212 00:09:31.735373 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8pzh" event={"ID":"c5e1046c-65b8-41b2-8bec-d7af367add71","Type":"ContainerStarted","Data":"6cbe1a2b581bbfcf40792ec87c08de4dd130282526271a1e01b118d270ba058c"} Dec 12 00:09:31 crc kubenswrapper[4917]: I1212 00:09:31.737194 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1ba53be26e7da59efc1b7cb27b6da02266d219671f278c802d08513aa81b2d55"} Dec 12 00:09:31 crc kubenswrapper[4917]: I1212 00:09:31.738847 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf203f00-76e7-4142-9d10-10e4d4ccf2d4","Type":"ContainerStarted","Data":"54eda4d476287de961af20f0b91f6cda307d50b7324dc1c508da8bb17a2edcba"} Dec 12 00:09:31 crc kubenswrapper[4917]: I1212 00:09:31.741015 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7vjh" event={"ID":"040517b1-b5e4-46e0-90c9-4fb4a7e5726f","Type":"ContainerStarted","Data":"ec9d77a8e9fc96f752104e1e456711645f2e81b53e29f9cc911f836ed8fab25e"} Dec 12 00:09:32 crc kubenswrapper[4917]: I1212 00:09:32.745583 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:09:32 crc kubenswrapper[4917]: I1212 00:09:32.825899 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=42.825840101 podStartE2EDuration="42.825840101s" podCreationTimestamp="2025-12-12 00:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:09:32.819472161 +0000 UTC m=+207.597272974" watchObservedRunningTime="2025-12-12 00:09:32.825840101 +0000 UTC m=+207.603640934" Dec 12 00:09:33 crc kubenswrapper[4917]: I1212 00:09:33.753298 4917 generic.go:334] "Generic (PLEG): container finished" podID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerID="ec9d77a8e9fc96f752104e1e456711645f2e81b53e29f9cc911f836ed8fab25e" exitCode=0 Dec 12 00:09:33 crc kubenswrapper[4917]: I1212 00:09:33.753402 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7vjh" event={"ID":"040517b1-b5e4-46e0-90c9-4fb4a7e5726f","Type":"ContainerDied","Data":"ec9d77a8e9fc96f752104e1e456711645f2e81b53e29f9cc911f836ed8fab25e"} Dec 12 00:09:33 crc kubenswrapper[4917]: I1212 00:09:33.755476 4917 generic.go:334] "Generic (PLEG): container finished" 
podID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerID="6cbe1a2b581bbfcf40792ec87c08de4dd130282526271a1e01b118d270ba058c" exitCode=0 Dec 12 00:09:33 crc kubenswrapper[4917]: I1212 00:09:33.755556 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8pzh" event={"ID":"c5e1046c-65b8-41b2-8bec-d7af367add71","Type":"ContainerDied","Data":"6cbe1a2b581bbfcf40792ec87c08de4dd130282526271a1e01b118d270ba058c"} Dec 12 00:09:34 crc kubenswrapper[4917]: I1212 00:09:34.761910 4917 generic.go:334] "Generic (PLEG): container finished" podID="73d83d51-dd4e-470a-af94-09ee620fe4da" containerID="b00e8db58023eb7a3903a461cf5106bab50d864b8ba3a919fb568fc183cb9984" exitCode=0 Dec 12 00:09:34 crc kubenswrapper[4917]: I1212 00:09:34.761980 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"73d83d51-dd4e-470a-af94-09ee620fe4da","Type":"ContainerDied","Data":"b00e8db58023eb7a3903a461cf5106bab50d864b8ba3a919fb568fc183cb9984"} Dec 12 00:09:35 crc kubenswrapper[4917]: I1212 00:09:35.767944 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f4t96" event={"ID":"58f4853b-9736-4a03-8c86-1627cb51acbe","Type":"ContainerStarted","Data":"1ad2e1e7c840efd32cfd12dadc635b61317b6a734b25637d9cea6eb3a587242b"} Dec 12 00:09:37 crc kubenswrapper[4917]: I1212 00:09:37.631724 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-f4t96" podStartSLOduration=188.631688429 podStartE2EDuration="3m8.631688429s" podCreationTimestamp="2025-12-12 00:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:09:35.785945331 +0000 UTC m=+210.563746144" watchObservedRunningTime="2025-12-12 00:09:37.631688429 +0000 UTC m=+212.409489262" Dec 12 00:09:39 crc kubenswrapper[4917]: I1212 00:09:39.180213 
4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 00:09:39 crc kubenswrapper[4917]: I1212 00:09:39.283996 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73d83d51-dd4e-470a-af94-09ee620fe4da-kubelet-dir\") pod \"73d83d51-dd4e-470a-af94-09ee620fe4da\" (UID: \"73d83d51-dd4e-470a-af94-09ee620fe4da\") " Dec 12 00:09:39 crc kubenswrapper[4917]: I1212 00:09:39.284082 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73d83d51-dd4e-470a-af94-09ee620fe4da-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "73d83d51-dd4e-470a-af94-09ee620fe4da" (UID: "73d83d51-dd4e-470a-af94-09ee620fe4da"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:09:39 crc kubenswrapper[4917]: I1212 00:09:39.284164 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73d83d51-dd4e-470a-af94-09ee620fe4da-kube-api-access\") pod \"73d83d51-dd4e-470a-af94-09ee620fe4da\" (UID: \"73d83d51-dd4e-470a-af94-09ee620fe4da\") " Dec 12 00:09:39 crc kubenswrapper[4917]: I1212 00:09:39.284397 4917 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73d83d51-dd4e-470a-af94-09ee620fe4da-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 00:09:39 crc kubenswrapper[4917]: I1212 00:09:39.291650 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d83d51-dd4e-470a-af94-09ee620fe4da-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "73d83d51-dd4e-470a-af94-09ee620fe4da" (UID: "73d83d51-dd4e-470a-af94-09ee620fe4da"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:09:39 crc kubenswrapper[4917]: I1212 00:09:39.386324 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73d83d51-dd4e-470a-af94-09ee620fe4da-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 00:09:39 crc kubenswrapper[4917]: I1212 00:09:39.790152 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"73d83d51-dd4e-470a-af94-09ee620fe4da","Type":"ContainerDied","Data":"e45608ac0524a523043c1035b636e17734bb011122e3d2c610cfc5c77f19ce7d"} Dec 12 00:09:39 crc kubenswrapper[4917]: I1212 00:09:39.790228 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45608ac0524a523043c1035b636e17734bb011122e3d2c610cfc5c77f19ce7d" Dec 12 00:09:39 crc kubenswrapper[4917]: I1212 00:09:39.790245 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 00:09:46 crc kubenswrapper[4917]: I1212 00:09:46.852126 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnjdr" event={"ID":"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2","Type":"ContainerStarted","Data":"6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366"} Dec 12 00:09:46 crc kubenswrapper[4917]: I1212 00:09:46.856640 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2zgs" event={"ID":"0e89623b-b8de-4d14-87bb-363bcbc0f859","Type":"ContainerStarted","Data":"f8a52fa9357382b47bdb5bb280e15a3181c0571f19101b7cfc70fe894c198b39"} Dec 12 00:09:46 crc kubenswrapper[4917]: I1212 00:09:46.859886 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7vjh" 
event={"ID":"040517b1-b5e4-46e0-90c9-4fb4a7e5726f","Type":"ContainerStarted","Data":"93c6666c53cf15d90e8cd8575c773429791da5ee2623d357979e39bae5877779"} Dec 12 00:09:48 crc kubenswrapper[4917]: I1212 00:09:48.872882 4917 generic.go:334] "Generic (PLEG): container finished" podID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerID="f8a52fa9357382b47bdb5bb280e15a3181c0571f19101b7cfc70fe894c198b39" exitCode=0 Dec 12 00:09:48 crc kubenswrapper[4917]: I1212 00:09:48.872958 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2zgs" event={"ID":"0e89623b-b8de-4d14-87bb-363bcbc0f859","Type":"ContainerDied","Data":"f8a52fa9357382b47bdb5bb280e15a3181c0571f19101b7cfc70fe894c198b39"} Dec 12 00:09:48 crc kubenswrapper[4917]: I1212 00:09:48.875892 4917 generic.go:334] "Generic (PLEG): container finished" podID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerID="6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366" exitCode=0 Dec 12 00:09:48 crc kubenswrapper[4917]: I1212 00:09:48.875966 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnjdr" event={"ID":"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2","Type":"ContainerDied","Data":"6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366"} Dec 12 00:09:48 crc kubenswrapper[4917]: I1212 00:09:48.906745 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r7vjh" podStartSLOduration=10.033932473 podStartE2EDuration="1m41.906723474s" podCreationTimestamp="2025-12-12 00:08:07 +0000 UTC" firstStartedPulling="2025-12-12 00:08:09.846326796 +0000 UTC m=+124.624127609" lastFinishedPulling="2025-12-12 00:09:41.719117787 +0000 UTC m=+216.496918610" observedRunningTime="2025-12-12 00:09:48.903032445 +0000 UTC m=+223.680833278" watchObservedRunningTime="2025-12-12 00:09:48.906723474 +0000 UTC m=+223.684524287" Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.926088 
4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8pzh" event={"ID":"c5e1046c-65b8-41b2-8bec-d7af367add71","Type":"ContainerStarted","Data":"e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb"} Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.928256 4917 generic.go:334] "Generic (PLEG): container finished" podID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerID="e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f" exitCode=0 Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.928338 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqcm8" event={"ID":"97d0ec19-7dd2-4401-86bd-1e3e6074801c","Type":"ContainerDied","Data":"e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f"} Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.930638 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnjdr" event={"ID":"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2","Type":"ContainerStarted","Data":"0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1"} Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.938635 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2zgs" event={"ID":"0e89623b-b8de-4d14-87bb-363bcbc0f859","Type":"ContainerStarted","Data":"c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485"} Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.947777 4917 generic.go:334] "Generic (PLEG): container finished" podID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerID="32c054ca5f893d2f90e919bfcd81d8708eefd23eb3759db12e0e67407e512a3d" exitCode=0 Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.947833 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5t8z" 
event={"ID":"f59fe677-4717-4fd4-8491-6f9d68ab5a54","Type":"ContainerDied","Data":"32c054ca5f893d2f90e919bfcd81d8708eefd23eb3759db12e0e67407e512a3d"} Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.950514 4917 generic.go:334] "Generic (PLEG): container finished" podID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerID="3e32486a4d20773519aa65e5d69f930bbd121afd29e929eb2dbdd95b3e1b2d74" exitCode=0 Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.950558 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7ml9" event={"ID":"c1fe8325-d2d0-4418-8c57-cdb509c32ce6","Type":"ContainerDied","Data":"3e32486a4d20773519aa65e5d69f930bbd121afd29e929eb2dbdd95b3e1b2d74"} Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.956579 4917 generic.go:334] "Generic (PLEG): container finished" podID="cbd23e21-81a0-4569-8eae-32be89672db5" containerID="9c089613a105fdd26f8a4bd7280bb6a873ce0de6adfcb5e5a94ccaf6eb247e14" exitCode=0 Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.956634 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfr6" event={"ID":"cbd23e21-81a0-4569-8eae-32be89672db5","Type":"ContainerDied","Data":"9c089613a105fdd26f8a4bd7280bb6a873ce0de6adfcb5e5a94ccaf6eb247e14"} Dec 12 00:09:52 crc kubenswrapper[4917]: I1212 00:09:52.960358 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h8pzh" podStartSLOduration=4.425886122 podStartE2EDuration="1m45.960333568s" podCreationTimestamp="2025-12-12 00:08:07 +0000 UTC" firstStartedPulling="2025-12-12 00:08:09.83095739 +0000 UTC m=+124.608758203" lastFinishedPulling="2025-12-12 00:09:51.365404836 +0000 UTC m=+226.143205649" observedRunningTime="2025-12-12 00:09:52.95146251 +0000 UTC m=+227.729263323" watchObservedRunningTime="2025-12-12 00:09:52.960333568 +0000 UTC m=+227.738134381" Dec 12 00:09:53 crc kubenswrapper[4917]: I1212 00:09:53.028196 
4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f2zgs" podStartSLOduration=3.709789919 podStartE2EDuration="1m42.028175429s" podCreationTimestamp="2025-12-12 00:08:11 +0000 UTC" firstStartedPulling="2025-12-12 00:08:13.088460612 +0000 UTC m=+127.866261425" lastFinishedPulling="2025-12-12 00:09:51.406846122 +0000 UTC m=+226.184646935" observedRunningTime="2025-12-12 00:09:53.026182496 +0000 UTC m=+227.803983339" watchObservedRunningTime="2025-12-12 00:09:53.028175429 +0000 UTC m=+227.805976262" Dec 12 00:09:53 crc kubenswrapper[4917]: I1212 00:09:53.054357 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bnjdr" podStartSLOduration=4.7318181 podStartE2EDuration="1m43.054330597s" podCreationTimestamp="2025-12-12 00:08:10 +0000 UTC" firstStartedPulling="2025-12-12 00:08:13.096012687 +0000 UTC m=+127.873813500" lastFinishedPulling="2025-12-12 00:09:51.418525184 +0000 UTC m=+226.196325997" observedRunningTime="2025-12-12 00:09:53.050231208 +0000 UTC m=+227.828032051" watchObservedRunningTime="2025-12-12 00:09:53.054330597 +0000 UTC m=+227.832131400" Dec 12 00:09:54 crc kubenswrapper[4917]: I1212 00:09:54.967767 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5t8z" event={"ID":"f59fe677-4717-4fd4-8491-6f9d68ab5a54","Type":"ContainerStarted","Data":"44caefb1808c222da7b98b41b9d70ede0469cdeb28bb8185d2ff00f170c3ff7f"} Dec 12 00:09:54 crc kubenswrapper[4917]: I1212 00:09:54.973014 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7ml9" event={"ID":"c1fe8325-d2d0-4418-8c57-cdb509c32ce6","Type":"ContainerStarted","Data":"c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c"} Dec 12 00:09:54 crc kubenswrapper[4917]: I1212 00:09:54.975781 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-mpfr6" event={"ID":"cbd23e21-81a0-4569-8eae-32be89672db5","Type":"ContainerStarted","Data":"d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32"} Dec 12 00:09:54 crc kubenswrapper[4917]: I1212 00:09:54.987439 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f5t8z" podStartSLOduration=2.328045173 podStartE2EDuration="1m47.987419687s" podCreationTimestamp="2025-12-12 00:08:07 +0000 UTC" firstStartedPulling="2025-12-12 00:08:08.788442817 +0000 UTC m=+123.566243630" lastFinishedPulling="2025-12-12 00:09:54.447817331 +0000 UTC m=+229.225618144" observedRunningTime="2025-12-12 00:09:54.985484626 +0000 UTC m=+229.763285449" watchObservedRunningTime="2025-12-12 00:09:54.987419687 +0000 UTC m=+229.765220500" Dec 12 00:09:55 crc kubenswrapper[4917]: I1212 00:09:55.006336 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k7ml9" podStartSLOduration=2.374069486 podStartE2EDuration="1m46.006315062s" podCreationTimestamp="2025-12-12 00:08:09 +0000 UTC" firstStartedPulling="2025-12-12 00:08:10.91053558 +0000 UTC m=+125.688336393" lastFinishedPulling="2025-12-12 00:09:54.542781136 +0000 UTC m=+229.320581969" observedRunningTime="2025-12-12 00:09:55.005687265 +0000 UTC m=+229.783488088" watchObservedRunningTime="2025-12-12 00:09:55.006315062 +0000 UTC m=+229.784115875" Dec 12 00:09:55 crc kubenswrapper[4917]: I1212 00:09:55.029262 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mpfr6" podStartSLOduration=4.328466643 podStartE2EDuration="1m48.029224693s" podCreationTimestamp="2025-12-12 00:08:07 +0000 UTC" firstStartedPulling="2025-12-12 00:08:10.939856949 +0000 UTC m=+125.717657762" lastFinishedPulling="2025-12-12 00:09:54.640614999 +0000 UTC m=+229.418415812" observedRunningTime="2025-12-12 00:09:55.025152724 +0000 
UTC m=+229.802953547" watchObservedRunningTime="2025-12-12 00:09:55.029224693 +0000 UTC m=+229.807025506" Dec 12 00:09:55 crc kubenswrapper[4917]: I1212 00:09:55.983950 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqcm8" event={"ID":"97d0ec19-7dd2-4401-86bd-1e3e6074801c","Type":"ContainerStarted","Data":"db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8"} Dec 12 00:09:56 crc kubenswrapper[4917]: I1212 00:09:56.006444 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qqcm8" podStartSLOduration=4.207723365 podStartE2EDuration="1m47.006422633s" podCreationTimestamp="2025-12-12 00:08:09 +0000 UTC" firstStartedPulling="2025-12-12 00:08:12.055470202 +0000 UTC m=+126.833271015" lastFinishedPulling="2025-12-12 00:09:54.85416947 +0000 UTC m=+229.631970283" observedRunningTime="2025-12-12 00:09:56.003218478 +0000 UTC m=+230.781019321" watchObservedRunningTime="2025-12-12 00:09:56.006422633 +0000 UTC m=+230.784223446" Dec 12 00:09:57 crc kubenswrapper[4917]: I1212 00:09:57.732261 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:09:57 crc kubenswrapper[4917]: I1212 00:09:57.732319 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:09:57 crc kubenswrapper[4917]: I1212 00:09:57.963324 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:09:57 crc kubenswrapper[4917]: I1212 00:09:57.965430 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:09:58 crc kubenswrapper[4917]: I1212 00:09:58.395502 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:09:58 crc kubenswrapper[4917]: I1212 00:09:58.395608 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:09:58 crc kubenswrapper[4917]: I1212 00:09:58.502586 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:09:58 crc kubenswrapper[4917]: I1212 00:09:58.502660 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:09:58 crc kubenswrapper[4917]: I1212 00:09:58.989671 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:09:58 crc kubenswrapper[4917]: I1212 00:09:58.990146 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:09:58 crc kubenswrapper[4917]: I1212 00:09:58.990398 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:09:58 crc kubenswrapper[4917]: I1212 00:09:58.991238 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:09:59 crc kubenswrapper[4917]: I1212 00:09:59.059630 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:09:59 crc kubenswrapper[4917]: I1212 00:09:59.062785 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:09:59 crc kubenswrapper[4917]: I1212 00:09:59.063843 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:09:59 crc 
kubenswrapper[4917]: I1212 00:09:59.639756 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:09:59 crc kubenswrapper[4917]: I1212 00:09:59.639863 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:09:59 crc kubenswrapper[4917]: I1212 00:09:59.639945 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:09:59 crc kubenswrapper[4917]: I1212 00:09:59.641091 4917 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb"} pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 00:09:59 crc kubenswrapper[4917]: I1212 00:09:59.641350 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" containerID="cri-o://9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb" gracePeriod=600 Dec 12 00:09:59 crc kubenswrapper[4917]: I1212 00:09:59.736557 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k7ml9" Dec 12 00:09:59 crc kubenswrapper[4917]: I1212 
00:09:59.736598 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k7ml9" Dec 12 00:09:59 crc kubenswrapper[4917]: I1212 00:09:59.791462 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k7ml9" Dec 12 00:10:00 crc kubenswrapper[4917]: I1212 00:10:00.059738 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k7ml9" Dec 12 00:10:00 crc kubenswrapper[4917]: I1212 00:10:00.431332 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qqcm8" Dec 12 00:10:00 crc kubenswrapper[4917]: I1212 00:10:00.431396 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qqcm8" Dec 12 00:10:00 crc kubenswrapper[4917]: I1212 00:10:00.473355 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qqcm8" Dec 12 00:10:00 crc kubenswrapper[4917]: I1212 00:10:00.481990 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5dp8w"] Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.030086 4917 generic.go:334] "Generic (PLEG): container finished" podID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerID="9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb" exitCode=0 Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.030203 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerDied","Data":"9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb"} Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.165387 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-h8pzh"] Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.165668 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h8pzh" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerName="registry-server" containerID="cri-o://e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb" gracePeriod=2 Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.364783 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mpfr6"] Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.365062 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mpfr6" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" containerName="registry-server" containerID="cri-o://d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32" gracePeriod=2 Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.437108 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.437167 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.555010 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.555082 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:10:01 crc kubenswrapper[4917]: I1212 00:10:01.592471 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:10:02 crc kubenswrapper[4917]: I1212 00:10:02.073310 4917 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:10:02 crc kubenswrapper[4917]: I1212 00:10:02.073882 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-qqcm8" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerName="registry-server" probeResult="failure" output=< Dec 12 00:10:02 crc kubenswrapper[4917]: timeout: failed to connect service ":50051" within 1s Dec 12 00:10:02 crc kubenswrapper[4917]: > Dec 12 00:10:02 crc kubenswrapper[4917]: I1212 00:10:02.499319 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bnjdr" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerName="registry-server" probeResult="failure" output=< Dec 12 00:10:02 crc kubenswrapper[4917]: timeout: failed to connect service ":50051" within 1s Dec 12 00:10:02 crc kubenswrapper[4917]: > Dec 12 00:10:05 crc kubenswrapper[4917]: I1212 00:10:05.243905 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 00:10:06 crc kubenswrapper[4917]: I1212 00:10:06.174905 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f2zgs"] Dec 12 00:10:06 crc kubenswrapper[4917]: I1212 00:10:06.175613 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f2zgs" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerName="registry-server" containerID="cri-o://c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485" gracePeriod=2 Dec 12 00:10:07 crc kubenswrapper[4917]: I1212 00:10:07.790592 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:10:08 crc kubenswrapper[4917]: E1212 00:10:08.395542 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb is running failed: container process not found" containerID="e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:08 crc kubenswrapper[4917]: E1212 00:10:08.395902 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb is running failed: container process not found" containerID="e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:08 crc kubenswrapper[4917]: E1212 00:10:08.396213 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb is running failed: container process not found" containerID="e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:08 crc kubenswrapper[4917]: E1212 00:10:08.396273 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-h8pzh" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerName="registry-server" Dec 12 00:10:08 crc kubenswrapper[4917]: E1212 00:10:08.503870 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32 is running failed: container process 
not found" containerID="d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:08 crc kubenswrapper[4917]: E1212 00:10:08.504484 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32 is running failed: container process not found" containerID="d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:08 crc kubenswrapper[4917]: E1212 00:10:08.505018 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32 is running failed: container process not found" containerID="d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:08 crc kubenswrapper[4917]: E1212 00:10:08.505045 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-mpfr6" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" containerName="registry-server" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.309944 4917 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 00:10:09 crc kubenswrapper[4917]: E1212 00:10:09.310279 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d83d51-dd4e-470a-af94-09ee620fe4da" containerName="pruner" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.310298 4917 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="73d83d51-dd4e-470a-af94-09ee620fe4da" containerName="pruner" Dec 12 00:10:09 crc kubenswrapper[4917]: E1212 00:10:09.310327 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe29aecd-7402-47c3-a15a-d5a489c48b29" containerName="image-pruner" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.310338 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe29aecd-7402-47c3-a15a-d5a489c48b29" containerName="image-pruner" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.310520 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe29aecd-7402-47c3-a15a-d5a489c48b29" containerName="image-pruner" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.310543 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d83d51-dd4e-470a-af94-09ee620fe4da" containerName="pruner" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.311169 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.339964 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.507757 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.507832 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.507866 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.507890 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.507946 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.608766 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.608849 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.608878 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.608902 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.608957 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.609066 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.609537 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.609569 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.609597 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.609625 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: I1212 00:10:09.639625 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:10:09 crc kubenswrapper[4917]: W1212 00:10:09.660797 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-53e6ddc3b6030bbedca12834edc83ebf30db373d6b54df48a2e341fb01d6296a WatchSource:0}: Error finding container 53e6ddc3b6030bbedca12834edc83ebf30db373d6b54df48a2e341fb01d6296a: Status 404 returned error can't find the container with id 53e6ddc3b6030bbedca12834edc83ebf30db373d6b54df48a2e341fb01d6296a Dec 12 00:10:10 crc kubenswrapper[4917]: I1212 00:10:10.469742 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qqcm8" Dec 12 00:10:10 crc kubenswrapper[4917]: I1212 00:10:10.965051 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqcm8"] Dec 12 00:10:11 crc kubenswrapper[4917]: I1212 00:10:11.475588 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:10:11 crc kubenswrapper[4917]: I1212 00:10:11.515301 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:10:11 crc kubenswrapper[4917]: E1212 00:10:11.555336 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485 is running failed: container process not found" containerID="c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:11 crc kubenswrapper[4917]: E1212 00:10:11.583479 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created 
or running: checking if PID of c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485 is running failed: container process not found" containerID="c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:11 crc kubenswrapper[4917]: E1212 00:10:11.585485 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485 is running failed: container process not found" containerID="c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:11 crc kubenswrapper[4917]: E1212 00:10:11.585546 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-f2zgs" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerName="registry-server" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.332420 4917 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.332773 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009" gracePeriod=15 Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.332861 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2" gracePeriod=15 Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.332940 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5" gracePeriod=15 Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.332986 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5" gracePeriod=15 Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.333004 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621" gracePeriod=15 Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.334881 4917 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 00:10:12 crc kubenswrapper[4917]: E1212 00:10:12.335265 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335284 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 12 00:10:12 crc kubenswrapper[4917]: E1212 00:10:12.335310 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335323 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 12 00:10:12 crc kubenswrapper[4917]: E1212 00:10:12.335336 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335349 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 00:10:12 crc kubenswrapper[4917]: E1212 00:10:12.335367 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335379 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 12 00:10:12 crc kubenswrapper[4917]: E1212 00:10:12.335395 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335408 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 00:10:12 crc kubenswrapper[4917]: E1212 00:10:12.335433 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335444 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335624 4917 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335874 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335901 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335926 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.335955 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 00:10:12 crc kubenswrapper[4917]: E1212 00:10:12.336165 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.336180 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.336889 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.450868 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 
00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.450940 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.450973 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.552383 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.552493 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.552545 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.552677 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.552695 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:12 crc kubenswrapper[4917]: I1212 00:10:12.552823 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:13 crc kubenswrapper[4917]: I1212 00:10:13.982474 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8pzh_c5e1046c-65b8-41b2-8bec-d7af367add71/registry-server/0.log" Dec 12 00:10:13 crc kubenswrapper[4917]: I1212 00:10:13.984122 4917 generic.go:334] "Generic (PLEG): container finished" podID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerID="e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb" exitCode=-1 Dec 12 00:10:13 crc kubenswrapper[4917]: I1212 00:10:13.984192 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8pzh" event={"ID":"c5e1046c-65b8-41b2-8bec-d7af367add71","Type":"ContainerDied","Data":"e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb"} Dec 12 00:10:14 crc kubenswrapper[4917]: I1212 00:10:14.992477 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"53e6ddc3b6030bbedca12834edc83ebf30db373d6b54df48a2e341fb01d6296a"} Dec 12 00:10:18 crc kubenswrapper[4917]: E1212 00:10:18.396085 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb is running failed: container process not found" containerID="e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:18 crc kubenswrapper[4917]: E1212 00:10:18.396904 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb is running failed: container process not found" containerID="e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:18 crc kubenswrapper[4917]: E1212 00:10:18.397174 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb is running failed: container process not found" containerID="e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:18 crc kubenswrapper[4917]: E1212 00:10:18.397198 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-h8pzh" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerName="registry-server" Dec 12 00:10:18 crc kubenswrapper[4917]: 
E1212 00:10:18.397635 4917 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-h8pzh.18804f443b0cd8e2\": dial tcp 38.129.56.15:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-h8pzh.18804f443b0cd8e2 openshift-marketplace 29302 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-h8pzh,UID:c5e1046c-65b8-41b2-8bec-d7af367add71,APIVersion:v1,ResourceVersion:28379,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 00:10:08 +0000 UTC,LastTimestamp:2025-12-12 00:10:18.39722689 +0000 UTC m=+253.175027703,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 00:10:18 crc kubenswrapper[4917]: E1212 00:10:18.503359 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32 is running failed: container process not found" containerID="d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:18 crc kubenswrapper[4917]: E1212 00:10:18.504310 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32 is running failed: container process not 
found" containerID="d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:18 crc kubenswrapper[4917]: E1212 00:10:18.504823 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32 is running failed: container process not found" containerID="d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:18 crc kubenswrapper[4917]: E1212 00:10:18.504886 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-mpfr6" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" containerName="registry-server" Dec 12 00:10:20 crc kubenswrapper[4917]: E1212 00:10:20.350990 4917 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:20 crc kubenswrapper[4917]: E1212 00:10:20.351702 4917 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:20 crc kubenswrapper[4917]: E1212 00:10:20.352579 4917 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:20 crc kubenswrapper[4917]: E1212 00:10:20.352933 4917 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:20 crc kubenswrapper[4917]: E1212 00:10:20.353237 4917 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:20 crc kubenswrapper[4917]: I1212 00:10:20.353283 4917 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 12 00:10:20 crc kubenswrapper[4917]: E1212 00:10:20.353582 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="200ms" Dec 12 00:10:20 crc kubenswrapper[4917]: E1212 00:10:20.554008 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="400ms" Dec 12 00:10:20 crc kubenswrapper[4917]: E1212 00:10:20.954982 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="800ms" Dec 12 00:10:21 crc kubenswrapper[4917]: E1212 00:10:21.555216 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485 is running 
failed: container process not found" containerID="c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:21 crc kubenswrapper[4917]: E1212 00:10:21.556236 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485 is running failed: container process not found" containerID="c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:21 crc kubenswrapper[4917]: E1212 00:10:21.556519 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485 is running failed: container process not found" containerID="c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:10:21 crc kubenswrapper[4917]: E1212 00:10:21.556560 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-f2zgs" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerName="registry-server" Dec 12 00:10:21 crc kubenswrapper[4917]: E1212 00:10:21.756369 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="1.6s" Dec 12 00:10:23 crc kubenswrapper[4917]: E1212 00:10:23.357388 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="3.2s" Dec 12 00:10:23 crc kubenswrapper[4917]: I1212 00:10:23.940564 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f2zgs_0e89623b-b8de-4d14-87bb-363bcbc0f859/registry-server/0.log" Dec 12 00:10:23 crc kubenswrapper[4917]: I1212 00:10:23.942327 4917 generic.go:334] "Generic (PLEG): container finished" podID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerID="c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485" exitCode=-1 Dec 12 00:10:23 crc kubenswrapper[4917]: I1212 00:10:23.942523 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2zgs" event={"ID":"0e89623b-b8de-4d14-87bb-363bcbc0f859","Type":"ContainerDied","Data":"c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485"} Dec 12 00:10:23 crc kubenswrapper[4917]: I1212 00:10:23.945044 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpfr6_cbd23e21-81a0-4569-8eae-32be89672db5/registry-server/0.log" Dec 12 00:10:23 crc kubenswrapper[4917]: I1212 00:10:23.945790 4917 generic.go:334] "Generic (PLEG): container finished" podID="cbd23e21-81a0-4569-8eae-32be89672db5" containerID="d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32" exitCode=137 Dec 12 00:10:23 crc kubenswrapper[4917]: I1212 00:10:23.945964 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfr6" event={"ID":"cbd23e21-81a0-4569-8eae-32be89672db5","Type":"ContainerDied","Data":"d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32"} Dec 12 00:10:23 crc kubenswrapper[4917]: I1212 00:10:23.946220 4917 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-qqcm8" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerName="registry-server" containerID="cri-o://db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8" gracePeriod=2 Dec 12 00:10:23 crc kubenswrapper[4917]: I1212 00:10:23.947697 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.872181 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpfr6_cbd23e21-81a0-4569-8eae-32be89672db5/registry-server/0.log" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.873095 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.873988 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.874471 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.877766 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.878130 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.878299 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.878448 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.880125 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8pzh_c5e1046c-65b8-41b2-8bec-d7af367add71/registry-server/0.log" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.880773 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.881028 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.881326 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.881756 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.881951 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.953411 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2zgs" event={"ID":"0e89623b-b8de-4d14-87bb-363bcbc0f859","Type":"ContainerDied","Data":"7ce044c2870115babf80cac225887ccbc5f8d542db7591f6228ac6c45c3a4c10"} Dec 12 00:10:24 crc 
kubenswrapper[4917]: I1212 00:10:24.953471 4917 scope.go:117] "RemoveContainer" containerID="c1d807814ac147ece5fb39693326d27639fa03b64b2badafe115441d80062485" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.953425 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2zgs" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.954200 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.954470 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.954671 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.954846 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.955686 
4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpfr6_cbd23e21-81a0-4569-8eae-32be89672db5/registry-server/0.log" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.956356 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfr6" event={"ID":"cbd23e21-81a0-4569-8eae-32be89672db5","Type":"ContainerDied","Data":"0e7a15eabb6d2957567d76fbd97f8df0ef186aaa810f0884a206dff56964c216"} Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.956441 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpfr6" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.956965 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.957163 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.957358 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.957565 4917 status_manager.go:851] "Failed to get status for pod" 
podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.959293 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.960541 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.961327 4917 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2" exitCode=0 Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.961376 4917 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5" exitCode=0 Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.961385 4917 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5" exitCode=0 Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.961394 4917 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621" exitCode=2 Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.961408 4917 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009" exitCode=0 Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.963908 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8pzh_c5e1046c-65b8-41b2-8bec-d7af367add71/registry-server/0.log" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.964813 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8pzh" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.964819 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8pzh" event={"ID":"c5e1046c-65b8-41b2-8bec-d7af367add71","Type":"ContainerDied","Data":"fd7efc75ac3ea5eeaaac0e86c26cea5c147c5614831b19ff6d8ebf31f20b5ed3"} Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.965748 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.965939 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.966764 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 
38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.967060 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.967467 4917 generic.go:334] "Generic (PLEG): container finished" podID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" containerID="54eda4d476287de961af20f0b91f6cda307d50b7324dc1c508da8bb17a2edcba" exitCode=0 Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.967541 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf203f00-76e7-4142-9d10-10e4d4ccf2d4","Type":"ContainerDied","Data":"54eda4d476287de961af20f0b91f6cda307d50b7324dc1c508da8bb17a2edcba"} Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.967921 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.968270 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.968491 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.968770 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.969016 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:24 crc kubenswrapper[4917]: I1212 00:10:24.969382 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1"} Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.035513 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-utilities\") pod \"c5e1046c-65b8-41b2-8bec-d7af367add71\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.035613 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-catalog-content\") pod \"cbd23e21-81a0-4569-8eae-32be89672db5\" (UID: 
\"cbd23e21-81a0-4569-8eae-32be89672db5\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.035691 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-utilities\") pod \"0e89623b-b8de-4d14-87bb-363bcbc0f859\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.035726 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-catalog-content\") pod \"c5e1046c-65b8-41b2-8bec-d7af367add71\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.035762 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-utilities\") pod \"cbd23e21-81a0-4569-8eae-32be89672db5\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.035790 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-catalog-content\") pod \"0e89623b-b8de-4d14-87bb-363bcbc0f859\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.035820 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzcqs\" (UniqueName: \"kubernetes.io/projected/0e89623b-b8de-4d14-87bb-363bcbc0f859-kube-api-access-pzcqs\") pod \"0e89623b-b8de-4d14-87bb-363bcbc0f859\" (UID: \"0e89623b-b8de-4d14-87bb-363bcbc0f859\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.035861 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5mhx\" (UniqueName: 
\"kubernetes.io/projected/c5e1046c-65b8-41b2-8bec-d7af367add71-kube-api-access-g5mhx\") pod \"c5e1046c-65b8-41b2-8bec-d7af367add71\" (UID: \"c5e1046c-65b8-41b2-8bec-d7af367add71\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.035932 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mddgd\" (UniqueName: \"kubernetes.io/projected/cbd23e21-81a0-4569-8eae-32be89672db5-kube-api-access-mddgd\") pod \"cbd23e21-81a0-4569-8eae-32be89672db5\" (UID: \"cbd23e21-81a0-4569-8eae-32be89672db5\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.037250 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-utilities" (OuterVolumeSpecName: "utilities") pod "c5e1046c-65b8-41b2-8bec-d7af367add71" (UID: "c5e1046c-65b8-41b2-8bec-d7af367add71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.037246 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-utilities" (OuterVolumeSpecName: "utilities") pod "0e89623b-b8de-4d14-87bb-363bcbc0f859" (UID: "0e89623b-b8de-4d14-87bb-363bcbc0f859"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.044217 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e1046c-65b8-41b2-8bec-d7af367add71-kube-api-access-g5mhx" (OuterVolumeSpecName: "kube-api-access-g5mhx") pod "c5e1046c-65b8-41b2-8bec-d7af367add71" (UID: "c5e1046c-65b8-41b2-8bec-d7af367add71"). InnerVolumeSpecName "kube-api-access-g5mhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.053626 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd23e21-81a0-4569-8eae-32be89672db5-kube-api-access-mddgd" (OuterVolumeSpecName: "kube-api-access-mddgd") pod "cbd23e21-81a0-4569-8eae-32be89672db5" (UID: "cbd23e21-81a0-4569-8eae-32be89672db5"). InnerVolumeSpecName "kube-api-access-mddgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.054709 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e89623b-b8de-4d14-87bb-363bcbc0f859-kube-api-access-pzcqs" (OuterVolumeSpecName: "kube-api-access-pzcqs") pod "0e89623b-b8de-4d14-87bb-363bcbc0f859" (UID: "0e89623b-b8de-4d14-87bb-363bcbc0f859"). InnerVolumeSpecName "kube-api-access-pzcqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.069928 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-utilities" (OuterVolumeSpecName: "utilities") pod "cbd23e21-81a0-4569-8eae-32be89672db5" (UID: "cbd23e21-81a0-4569-8eae-32be89672db5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.091304 4917 scope.go:117] "RemoveContainer" containerID="f8a52fa9357382b47bdb5bb280e15a3181c0571f19101b7cfc70fe894c198b39" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.110109 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5e1046c-65b8-41b2-8bec-d7af367add71" (UID: "c5e1046c-65b8-41b2-8bec-d7af367add71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.117890 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbd23e21-81a0-4569-8eae-32be89672db5" (UID: "cbd23e21-81a0-4569-8eae-32be89672db5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.137394 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.137440 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.137453 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzcqs\" (UniqueName: \"kubernetes.io/projected/0e89623b-b8de-4d14-87bb-363bcbc0f859-kube-api-access-pzcqs\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.137469 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5mhx\" (UniqueName: \"kubernetes.io/projected/c5e1046c-65b8-41b2-8bec-d7af367add71-kube-api-access-g5mhx\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.137481 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mddgd\" (UniqueName: \"kubernetes.io/projected/cbd23e21-81a0-4569-8eae-32be89672db5-kube-api-access-mddgd\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.137494 4917 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e1046c-65b8-41b2-8bec-d7af367add71-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.137505 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd23e21-81a0-4569-8eae-32be89672db5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.137515 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.175507 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e89623b-b8de-4d14-87bb-363bcbc0f859" (UID: "0e89623b-b8de-4d14-87bb-363bcbc0f859"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.188517 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.190156 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.190961 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.191588 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.192170 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.192604 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.192873 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.193151 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: 
connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.193385 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.193598 4917 scope.go:117] "RemoveContainer" containerID="8563ff0f2d4502978d6940b81821d6181991a63cd0bd318c2a35cec2e6cf5e19" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.209905 4917 scope.go:117] "RemoveContainer" containerID="d6883738d0e654c9b1c90defd51d64dafe09e56d5e02c8964333e16e64658a32" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.224886 4917 scope.go:117] "RemoveContainer" containerID="9c089613a105fdd26f8a4bd7280bb6a873ce0de6adfcb5e5a94ccaf6eb247e14" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.239069 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e89623b-b8de-4d14-87bb-363bcbc0f859-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.244540 4917 scope.go:117] "RemoveContainer" containerID="895b97b261e58ac606716f6c6bba98bbd8c41ad55d9d76a2806c6625c2b71f29" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.261106 4917 scope.go:117] "RemoveContainer" containerID="cd44168d8ccee1ae1563328585c6727a51dcb3ca977cba34ed8e68599d2c6308" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.273449 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc 
kubenswrapper[4917]: I1212 00:10:25.275802 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.276151 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.276457 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.277019 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.277229 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.283707 
4917 scope.go:117] "RemoveContainer" containerID="e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.300548 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.300772 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.300970 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.301196 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.301458 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.301747 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.301984 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.302173 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.302431 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.302686 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.302938 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.303160 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.308477 4917 scope.go:117] "RemoveContainer" containerID="6cbe1a2b581bbfcf40792ec87c08de4dd130282526271a1e01b118d270ba058c" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.321973 4917 scope.go:117] "RemoveContainer" containerID="495e2dddc48b1d49fd4cd6479a1410576b00e7777f98542b3daa7f5f64f4cea3" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.339949 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.340044 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 12 00:10:25 crc 
kubenswrapper[4917]: I1212 00:10:25.340080 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.340300 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.340323 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.340360 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.340393 4917 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.441530 4917 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.441573 4917 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.519397 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" containerName="oauth-openshift" containerID="cri-o://71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605" gracePeriod=15 Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.578350 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqcm8" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.579338 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.579861 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.580396 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.580732 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.581063 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 
38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.582043 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.607855 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.608574 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.609078 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.609554 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection 
refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.609965 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.610290 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.614080 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.744470 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g628c\" (UniqueName: \"kubernetes.io/projected/97d0ec19-7dd2-4401-86bd-1e3e6074801c-kube-api-access-g628c\") pod \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.744543 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-catalog-content\") pod \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.744588 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-utilities\") pod \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\" (UID: \"97d0ec19-7dd2-4401-86bd-1e3e6074801c\") " Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.747518 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-utilities" (OuterVolumeSpecName: "utilities") pod "97d0ec19-7dd2-4401-86bd-1e3e6074801c" (UID: "97d0ec19-7dd2-4401-86bd-1e3e6074801c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.760848 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d0ec19-7dd2-4401-86bd-1e3e6074801c-kube-api-access-g628c" (OuterVolumeSpecName: "kube-api-access-g628c") pod "97d0ec19-7dd2-4401-86bd-1e3e6074801c" (UID: "97d0ec19-7dd2-4401-86bd-1e3e6074801c"). InnerVolumeSpecName "kube-api-access-g628c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.773533 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97d0ec19-7dd2-4401-86bd-1e3e6074801c" (UID: "97d0ec19-7dd2-4401-86bd-1e3e6074801c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.845971 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g628c\" (UniqueName: \"kubernetes.io/projected/97d0ec19-7dd2-4401-86bd-1e3e6074801c-kube-api-access-g628c\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.846024 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.846037 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d0ec19-7dd2-4401-86bd-1e3e6074801c-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.927394 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.928109 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.928515 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.929035 4917 status_manager.go:851] "Failed to get status for pod" 
podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.929276 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.929513 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.929810 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.980828 4917 generic.go:334] "Generic (PLEG): container finished" podID="ae4dff24-ae34-4029-a0a1-30e9a379f091" containerID="71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605" exitCode=0 Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.980881 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.980913 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" event={"ID":"ae4dff24-ae34-4029-a0a1-30e9a379f091","Type":"ContainerDied","Data":"71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605"} Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.980981 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" event={"ID":"ae4dff24-ae34-4029-a0a1-30e9a379f091","Type":"ContainerDied","Data":"6c37b8aed5a19d211205cb563c5c6aec72a3af534859869c18e9dcf46815d170"} Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.981006 4917 scope.go:117] "RemoveContainer" containerID="71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.981762 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.982101 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.982496 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.982767 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.982977 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.983178 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.985048 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"6b7c442f4c3460c05c98aadf1500d6fe9cd23a4e533cf7d6262e8d9432e3dd4c"} Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.985469 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.985710 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.986008 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.986259 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.986511 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.986777 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.987012 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.988501 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.989175 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.989765 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.990006 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.990211 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.990588 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.990800 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.991018 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.991253 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.991471 4917 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.991774 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.991997 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.992176 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.992356 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.992531 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.992782 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.992965 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.993186 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.995818 4917 generic.go:334] "Generic (PLEG): container finished" podID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerID="db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8" exitCode=0 Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.995872 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqcm8" 
event={"ID":"97d0ec19-7dd2-4401-86bd-1e3e6074801c","Type":"ContainerDied","Data":"db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8"} Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.995898 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqcm8" event={"ID":"97d0ec19-7dd2-4401-86bd-1e3e6074801c","Type":"ContainerDied","Data":"b5015474f9262f701f271443223f6b7798d3f81b4ae32fc20ad6c3363fda430a"} Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.995965 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqcm8" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.997627 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.997927 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.998144 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.998542 4917 status_manager.go:851] "Failed to get status for pod" 
podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.998720 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.998911 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.999417 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:25 crc kubenswrapper[4917]: I1212 00:10:25.999825 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.000039 4917 status_manager.go:851] "Failed to 
get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.000186 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.000328 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.000476 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.000673 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.000855 4917 status_manager.go:851] "Failed to get 
status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.001008 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.001175 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.001353 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.003175 4917 scope.go:117] "RemoveContainer" containerID="71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605" Dec 12 00:10:26 crc kubenswrapper[4917]: E1212 00:10:26.003540 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605\": container with ID starting with 
71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605 not found: ID does not exist" containerID="71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.003572 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605"} err="failed to get container status \"71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605\": rpc error: code = NotFound desc = could not find container \"71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605\": container with ID starting with 71c6465587794442c59a9e0f13a08a7c3835f599f6417c481ad6527a76294605 not found: ID does not exist" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.003594 4917 scope.go:117] "RemoveContainer" containerID="7548befc32ede84e71b6b68d2371b9e567c942c0de83f82c149210b7f5cbe4b2" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.011361 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.011660 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.011983 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.012198 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.012408 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.012606 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.012912 4917 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.013155 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.013511 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.017326 4917 scope.go:117] "RemoveContainer" containerID="971587e4cee67e1d98a9dddafdf69fe17ea20bf95063d70e9ae26a650e93e0a5" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.035106 4917 scope.go:117] "RemoveContainer" containerID="48da97c501f86deff3ac86b304c0f4e36aa032e0030e04013a617a099fc7afb5" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049217 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-ocp-branding-template\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049262 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-dir\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049329 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdcvl\" (UniqueName: \"kubernetes.io/projected/ae4dff24-ae34-4029-a0a1-30e9a379f091-kube-api-access-jdcvl\") pod 
\"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049365 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-cliconfig\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049419 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-serving-cert\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049465 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-idp-0-file-data\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049486 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-error\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049538 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-trusted-ca-bundle\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: 
\"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049530 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049558 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-provider-selection\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049690 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-login\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049735 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-policies\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049772 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-session\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 
12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049814 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-router-certs\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.049841 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-service-ca\") pod \"ae4dff24-ae34-4029-a0a1-30e9a379f091\" (UID: \"ae4dff24-ae34-4029-a0a1-30e9a379f091\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.050010 4917 scope.go:117] "RemoveContainer" containerID="b54070d7b62003baf8859261be716dcc36a8682e0c63a555f924f853fcb26621" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.050656 4917 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.051107 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: E1212 00:10:26.051333 4917 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-h8pzh.18804f443b0cd8e2\": dial tcp 38.129.56.15:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-h8pzh.18804f443b0cd8e2 openshift-marketplace 29302 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-h8pzh,UID:c5e1046c-65b8-41b2-8bec-d7af367add71,APIVersion:v1,ResourceVersion:28379,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of e21788132dceedccc1edf6866f4c25e58a6b33718698502f52d1cb495dd820bb is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 00:10:08 +0000 UTC,LastTimestamp:2025-12-12 00:10:18.39722689 +0000 UTC m=+253.175027703,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.051550 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.051668 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.051866 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.054261 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.054802 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4dff24-ae34-4029-a0a1-30e9a379f091-kube-api-access-jdcvl" (OuterVolumeSpecName: "kube-api-access-jdcvl") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "kube-api-access-jdcvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.060149 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.060228 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.060350 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.060619 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.061008 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.061624 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.064240 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ae4dff24-ae34-4029-a0a1-30e9a379f091" (UID: "ae4dff24-ae34-4029-a0a1-30e9a379f091"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.091786 4917 scope.go:117] "RemoveContainer" containerID="28cb1cbb992e6c369e9459aaef6e6d0f2fb01d2486131afbd66f402843013009" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.107113 4917 scope.go:117] "RemoveContainer" containerID="2e27fb3c7433abd8c25065cc7eb9eba17c756ef8a9614d3d242f4e751cc59667" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.124335 4917 scope.go:117] "RemoveContainer" containerID="db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.149107 4917 scope.go:117] "RemoveContainer" containerID="e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151728 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151770 4917 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151784 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151799 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151808 4917 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151823 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151834 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdcvl\" (UniqueName: \"kubernetes.io/projected/ae4dff24-ae34-4029-a0a1-30e9a379f091-kube-api-access-jdcvl\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151849 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151859 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151871 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151881 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-error\") on node 
\"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151891 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.151901 4917 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae4dff24-ae34-4029-a0a1-30e9a379f091-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.166442 4917 scope.go:117] "RemoveContainer" containerID="c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.169796 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.170535 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.170748 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.171164 4917 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.171846 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.172455 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.172678 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.172865 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.173693 4917 status_manager.go:851] "Failed to get status 
for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.174248 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.184628 4917 scope.go:117] "RemoveContainer" containerID="db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8" Dec 12 00:10:26 crc kubenswrapper[4917]: E1212 00:10:26.185133 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8\": container with ID starting with db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8 not found: ID does not exist" containerID="db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.185167 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8"} err="failed to get container status \"db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8\": rpc error: code = NotFound desc = could not find container \"db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8\": container with ID starting with db26e69363400b410e612fcd8dfaccf3baf0856a96c4a10ba92d682ff274b8b8 not found: ID does not exist" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.185196 4917 scope.go:117] "RemoveContainer" 
containerID="e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f" Dec 12 00:10:26 crc kubenswrapper[4917]: E1212 00:10:26.185607 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f\": container with ID starting with e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f not found: ID does not exist" containerID="e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.185629 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f"} err="failed to get container status \"e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f\": rpc error: code = NotFound desc = could not find container \"e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f\": container with ID starting with e79b45216d3cb854b1a1757c26bbbe85c815dac65796f4e0e72c067f4019f40f not found: ID does not exist" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.185678 4917 scope.go:117] "RemoveContainer" containerID="c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b" Dec 12 00:10:26 crc kubenswrapper[4917]: E1212 00:10:26.185930 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b\": container with ID starting with c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b not found: ID does not exist" containerID="c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.185952 4917 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b"} err="failed to get container status \"c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b\": rpc error: code = NotFound desc = could not find container \"c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b\": container with ID starting with c4347b122b83e84475d46fe99d0ac0711959ac0b57e9282d88a75c0e0ee84d0b not found: ID does not exist" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.294079 4917 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.294444 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.294950 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.295219 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.295440 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.295687 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.295975 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.296254 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.296466 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.354278 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kubelet-dir\") pod \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.354332 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-var-lock\") pod \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.354360 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kube-api-access\") pod \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\" (UID: \"cf203f00-76e7-4142-9d10-10e4d4ccf2d4\") " Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.354398 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf203f00-76e7-4142-9d10-10e4d4ccf2d4" (UID: "cf203f00-76e7-4142-9d10-10e4d4ccf2d4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.354450 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-var-lock" (OuterVolumeSpecName: "var-lock") pod "cf203f00-76e7-4142-9d10-10e4d4ccf2d4" (UID: "cf203f00-76e7-4142-9d10-10e4d4ccf2d4"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.354891 4917 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.354917 4917 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-var-lock\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.357571 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf203f00-76e7-4142-9d10-10e4d4ccf2d4" (UID: "cf203f00-76e7-4142-9d10-10e4d4ccf2d4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:10:26 crc kubenswrapper[4917]: I1212 00:10:26.455883 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf203f00-76e7-4142-9d10-10e4d4ccf2d4-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 00:10:26 crc kubenswrapper[4917]: E1212 00:10:26.559387 4917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.15:6443: connect: connection refused" interval="6.4s" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.005430 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.005865 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf203f00-76e7-4142-9d10-10e4d4ccf2d4","Type":"ContainerDied","Data":"b256c440440cc7262ffb2f4cec86d5f058ded63b422f3841293a4f679b517911"} Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.005936 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b256c440440cc7262ffb2f4cec86d5f058ded63b422f3841293a4f679b517911" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.012499 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.012553 4917 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235" exitCode=1 Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.012611 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235"} Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.013206 4917 scope.go:117] "RemoveContainer" containerID="83c0f6c96c0d68a2116fddb6a7fb927485bab068e4c0a4cb146aab96585fc235" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.013472 4917 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.014486 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.014874 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.015087 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.015283 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.015511 4917 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.015751 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.016028 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.016255 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.016522 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.018702 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.018959 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.019188 4917 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.019432 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.019697 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.019916 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.020153 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.020397 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.020617 4917 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.020874 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.601514 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.602293 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.602830 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.603205 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.603592 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.603868 4917 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.604127 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.604381 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.604615 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.604905 4917 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.605197 4917 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.617617 4917 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84ca9710-d96a-4794-a7a2-d7440ab355e1" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.617669 4917 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84ca9710-d96a-4794-a7a2-d7440ab355e1" Dec 12 00:10:27 crc kubenswrapper[4917]: E1212 00:10:27.618183 4917 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:27 crc kubenswrapper[4917]: I1212 00:10:27.618757 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:27 crc kubenswrapper[4917]: W1212 00:10:27.635503 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-80a1479635b4303a96b03e3ebe80edbbd2db8fcc4ce504674d6d6139dd81fedb WatchSource:0}: Error finding container 80a1479635b4303a96b03e3ebe80edbbd2db8fcc4ce504674d6d6139dd81fedb: Status 404 returned error can't find the container with id 80a1479635b4303a96b03e3ebe80edbbd2db8fcc4ce504674d6d6139dd81fedb Dec 12 00:10:28 crc kubenswrapper[4917]: I1212 00:10:28.037975 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 12 00:10:28 crc kubenswrapper[4917]: I1212 00:10:28.038354 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5093a2b6259034f1ba6866fedb5aaff23a9cbf41fcd7e7acf63af81d403c5aaf"} Dec 12 00:10:28 crc kubenswrapper[4917]: I1212 00:10:28.039493 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"80a1479635b4303a96b03e3ebe80edbbd2db8fcc4ce504674d6d6139dd81fedb"} Dec 12 00:10:28 crc kubenswrapper[4917]: I1212 00:10:28.859428 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.047304 4917 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c928902f39c2716ee86888275b996eee7a36edfff193fad04f63b2922377da47" exitCode=0 Dec 12 00:10:29 crc 
kubenswrapper[4917]: I1212 00:10:29.047423 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c928902f39c2716ee86888275b996eee7a36edfff193fad04f63b2922377da47"} Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.047596 4917 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84ca9710-d96a-4794-a7a2-d7440ab355e1" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.047612 4917 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84ca9710-d96a-4794-a7a2-d7440ab355e1" Dec 12 00:10:29 crc kubenswrapper[4917]: E1212 00:10:29.048042 4917 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.048171 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.048629 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.048953 4917 status_manager.go:851] "Failed to get status for pod" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" 
pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.049199 4917 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.049422 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.049725 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.050187 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.050435 4917 status_manager.go:851] "Failed to get status for 
pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.050775 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.051062 4917 status_manager.go:851] "Failed to get status for pod" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" pod="openshift-marketplace/community-operators-h8pzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-h8pzh\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.051317 4917 status_manager.go:851] "Failed to get status for pod" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" pod="openshift-marketplace/redhat-operators-f2zgs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2zgs\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.051579 4917 status_manager.go:851] "Failed to get status for pod" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.051798 4917 status_manager.go:851] "Failed to get status for pod" 
podUID="cbd23e21-81a0-4569-8eae-32be89672db5" pod="openshift-marketplace/certified-operators-mpfr6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mpfr6\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.051982 4917 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.052174 4917 status_manager.go:851] "Failed to get status for pod" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-ktvtt\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.052361 4917 status_manager.go:851] "Failed to get status for pod" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" pod="openshift-authentication/oauth-openshift-558db77b4-5dp8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5dp8w\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.052576 4917 status_manager.go:851] "Failed to get status for pod" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" pod="openshift-marketplace/redhat-marketplace-qqcm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qqcm8\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.052790 4917 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.15:6443: connect: connection refused" Dec 12 00:10:29 crc kubenswrapper[4917]: I1212 00:10:29.733283 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:10:30 crc kubenswrapper[4917]: I1212 00:10:30.062945 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6761a1763cfa7c1dfb8594b44327f51468a44a87a835947891283eb75a9e57c9"} Dec 12 00:10:30 crc kubenswrapper[4917]: I1212 00:10:30.062993 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1907f11d87dce8a18f0599de76efd631eae4447fa0387360cbfcfe73f324181a"} Dec 12 00:10:30 crc kubenswrapper[4917]: I1212 00:10:30.063006 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"881919a785611ba121cdfffee3e40fba6d9fea6e18b0cadf06a2b50ee552c835"} Dec 12 00:10:31 crc kubenswrapper[4917]: I1212 00:10:31.072318 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6ae74279e6019296e2bcf7b825187c89b89b608815c5aae8737b8f253d2d8e2f"} Dec 12 00:10:31 crc kubenswrapper[4917]: I1212 00:10:31.072747 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b9399951ebb0ff785ef8352af8f04be791e9fb9a6efeb6a6989bb19385a632c9"} Dec 12 00:10:31 crc kubenswrapper[4917]: I1212 00:10:31.072825 4917 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84ca9710-d96a-4794-a7a2-d7440ab355e1" Dec 12 00:10:31 crc kubenswrapper[4917]: I1212 00:10:31.072859 4917 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84ca9710-d96a-4794-a7a2-d7440ab355e1" Dec 12 00:10:32 crc kubenswrapper[4917]: I1212 00:10:32.619267 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:32 crc kubenswrapper[4917]: I1212 00:10:32.619639 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:32 crc kubenswrapper[4917]: I1212 00:10:32.626883 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:33 crc kubenswrapper[4917]: I1212 00:10:33.985974 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:10:33 crc kubenswrapper[4917]: I1212 00:10:33.992291 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:10:36 crc kubenswrapper[4917]: I1212 00:10:36.086217 4917 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:36 crc kubenswrapper[4917]: I1212 00:10:36.309566 4917 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="5b65c66f-148b-481e-b962-76af0f0e03ad" Dec 12 00:10:37 crc kubenswrapper[4917]: I1212 00:10:37.107597 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:37 crc kubenswrapper[4917]: I1212 00:10:37.107666 4917 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84ca9710-d96a-4794-a7a2-d7440ab355e1" Dec 12 00:10:37 crc kubenswrapper[4917]: I1212 00:10:37.107703 4917 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84ca9710-d96a-4794-a7a2-d7440ab355e1" Dec 12 00:10:37 crc kubenswrapper[4917]: I1212 00:10:37.112780 4917 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5b65c66f-148b-481e-b962-76af0f0e03ad" Dec 12 00:10:37 crc kubenswrapper[4917]: I1212 00:10:37.113347 4917 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://881919a785611ba121cdfffee3e40fba6d9fea6e18b0cadf06a2b50ee552c835" Dec 12 00:10:37 crc kubenswrapper[4917]: I1212 00:10:37.113371 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:38 crc kubenswrapper[4917]: I1212 00:10:38.112727 4917 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84ca9710-d96a-4794-a7a2-d7440ab355e1" Dec 12 00:10:38 crc kubenswrapper[4917]: I1212 00:10:38.113117 4917 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="84ca9710-d96a-4794-a7a2-d7440ab355e1" Dec 12 00:10:38 crc kubenswrapper[4917]: I1212 00:10:38.124360 4917 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5b65c66f-148b-481e-b962-76af0f0e03ad" Dec 12 00:10:39 crc kubenswrapper[4917]: I1212 00:10:39.737163 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 00:10:45 crc kubenswrapper[4917]: I1212 00:10:45.667918 4917 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:46.518182 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:46.563071 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:47.043400 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:47.327425 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:47.631761 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:47.695083 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:47.855663 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:47.875108 4917 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.092483 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.127345 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.176190 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.177961 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.227831 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.257066 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.322401 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.359962 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.404225 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.523806 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.568339 4917 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.603954 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.629401 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.655477 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.788047 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.806834 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.881029 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.926738 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 12 00:10:48 crc kubenswrapper[4917]: I1212 00:10:48.944177 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.254912 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.413757 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 
12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.475492 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.539709 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.599343 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.645702 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.759268 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.769485 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.800024 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.803223 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.928009 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 00:10:49 crc kubenswrapper[4917]: I1212 00:10:49.962418 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.072200 4917 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.118468 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.213985 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.220212 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.241910 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.264518 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.297212 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.360440 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.383284 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.474053 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.497718 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 12 00:10:50 crc 
kubenswrapper[4917]: I1212 00:10:50.516368 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.687474 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.749428 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.793468 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.886291 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 00:10:50 crc kubenswrapper[4917]: I1212 00:10:50.887724 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 12 00:10:51 crc kubenswrapper[4917]: I1212 00:10:51.140796 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 00:10:51 crc kubenswrapper[4917]: I1212 00:10:51.318992 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 12 00:10:51 crc kubenswrapper[4917]: I1212 00:10:51.425165 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 12 00:10:51 crc kubenswrapper[4917]: I1212 00:10:51.526742 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 12 00:10:51 crc kubenswrapper[4917]: I1212 00:10:51.739934 4917 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 12 00:10:51 crc kubenswrapper[4917]: I1212 00:10:51.828817 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 12 00:10:51 crc kubenswrapper[4917]: I1212 00:10:51.830187 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 12 00:10:51 crc kubenswrapper[4917]: I1212 00:10:51.849775 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.010997 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.083511 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.141551 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.169260 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.170093 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.235878 4917 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.240987 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.240946621 podStartE2EDuration="43.240946621s" 
podCreationTimestamp="2025-12-12 00:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:10:36.230930194 +0000 UTC m=+271.008731017" watchObservedRunningTime="2025-12-12 00:10:52.240946621 +0000 UTC m=+287.018747524" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.245849 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h8pzh","openshift-marketplace/certified-operators-mpfr6","openshift-marketplace/redhat-marketplace-qqcm8","openshift-marketplace/redhat-operators-f2zgs","openshift-authentication/oauth-openshift-558db77b4-5dp8w","openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.245988 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.250825 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.266904 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.266873066 podStartE2EDuration="16.266873066s" podCreationTimestamp="2025-12-12 00:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:10:52.262767842 +0000 UTC m=+287.040568675" watchObservedRunningTime="2025-12-12 00:10:52.266873066 +0000 UTC m=+287.044673899" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.336850 4917 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.494132 4917 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.545682 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.552585 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.610567 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.627998 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.634468 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.654864 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.689181 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.698627 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.715890 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.717528 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 12 00:10:52 crc kubenswrapper[4917]: 
I1212 00:10:52.814324 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.825162 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.903528 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.924294 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.994726 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 12 00:10:52 crc kubenswrapper[4917]: I1212 00:10:52.996842 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.060615 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.123315 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.161498 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.168782 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.245473 4917 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.251099 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.273533 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.294688 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.309927 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.452112 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.607843 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" path="/var/lib/kubelet/pods/0e89623b-b8de-4d14-87bb-363bcbc0f859/volumes" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.608485 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" path="/var/lib/kubelet/pods/97d0ec19-7dd2-4401-86bd-1e3e6074801c/volumes" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.609193 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" path="/var/lib/kubelet/pods/ae4dff24-ae34-4029-a0a1-30e9a379f091/volumes" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.610407 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" path="/var/lib/kubelet/pods/c5e1046c-65b8-41b2-8bec-d7af367add71/volumes" Dec 12 00:10:53 crc 
kubenswrapper[4917]: I1212 00:10:53.611012 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" path="/var/lib/kubelet/pods/cbd23e21-81a0-4569-8eae-32be89672db5/volumes" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.661868 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.728585 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.739680 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.793615 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.887970 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 12 00:10:53 crc kubenswrapper[4917]: I1212 00:10:53.981738 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.001858 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.037414 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.048824 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.074321 4917 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.091062 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.153533 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.261831 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.310484 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.327833 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.395265 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.410439 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.520114 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.525573 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.549860 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 12 00:10:54 crc kubenswrapper[4917]: 
I1212 00:10:54.580371 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.761517 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.769805 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 12 00:10:54 crc kubenswrapper[4917]: I1212 00:10:54.840884 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.026732 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.051247 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.157856 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.218323 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.327189 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.338627 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.517985 4917 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.548154 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.700452 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.723131 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.797333 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.809968 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.873840 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 12 00:10:55 crc kubenswrapper[4917]: I1212 00:10:55.874888 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.022158 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.067953 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.141317 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 12 00:10:56 crc
kubenswrapper[4917]: I1212 00:10:56.191526 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.249332 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.346378 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.380281 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.498466 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.509301 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.510612 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.539345 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.632145 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.638599 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.661092 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212
00:10:56.670663 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.711318 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.763896 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.823670 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.879397 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.891907 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.901290 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 12 00:10:56 crc kubenswrapper[4917]: I1212 00:10:56.906091 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.068221 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.069036 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212
00:10:57.083989 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.255975 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.256147 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.432697 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.533071 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.545664 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.647679 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"]
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.647990 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerName="extract-utilities"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648006 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerName="extract-utilities"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648016 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerName="extract-utilities"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212
00:10:57.648023 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerName="extract-utilities"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648031 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerName="extract-content"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648037 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerName="extract-content"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648046 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" containerName="installer"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648054 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" containerName="installer"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648062 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerName="extract-content"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648067 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerName="extract-content"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648077 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648083 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648093 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" containerName="extract-utilities"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648098 4917
state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" containerName="extract-utilities"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648107 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerName="extract-content"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648113 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerName="extract-content"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648120 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerName="extract-utilities"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648126 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerName="extract-utilities"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648134 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648139 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648150 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" containerName="oauth-openshift"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648155 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" containerName="oauth-openshift"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648163 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" containerName="extract-content"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648168 4917
state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" containerName="extract-content"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648178 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648183 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: E1212 00:10:57.648195 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648201 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648305 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae4dff24-ae34-4029-a0a1-30e9a379f091" containerName="oauth-openshift"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648318 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd23e21-81a0-4569-8eae-32be89672db5" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648327 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e1046c-65b8-41b2-8bec-d7af367add71" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648338 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf203f00-76e7-4142-9d10-10e4d4ccf2d4" containerName="installer"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648346 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d0ec19-7dd2-4401-86bd-1e3e6074801c" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648355 4917
memory_manager.go:354] "RemoveStaleState removing state" podUID="0e89623b-b8de-4d14-87bb-363bcbc0f859" containerName="registry-server"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.648931 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.651717 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.653394 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.653696 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.653718 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.654045 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.654128 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.676442 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.677534 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.677830 4917 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.677987 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.678211 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.678415 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.702561 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"]
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.703492 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.708330 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.710000 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.806957 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807409 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-session\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807428 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807446 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-template-login\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807495 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2bdb1863-e77b-4d9c-828b-413c1a385dda-audit-dir\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807524 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7t58\" (UniqueName: \"kubernetes.io/projected/2bdb1863-e77b-4d9c-828b-413c1a385dda-kube-api-access-k7t58\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID:
\"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807540 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-template-error\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807564 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807595 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807633 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc
kubenswrapper[4917]: I1212 00:10:57.807692 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807733 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807773 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.807861 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-audit-policies\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.860976 4917 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908539 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-template-login\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908602 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2bdb1863-e77b-4d9c-828b-413c1a385dda-audit-dir\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908629 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7t58\" (UniqueName: \"kubernetes.io/projected/2bdb1863-e77b-4d9c-828b-413c1a385dda-kube-api-access-k7t58\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908671 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-template-error\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908707 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName:
\"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908748 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908776 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908803 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908826 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") "
pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908854 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908886 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-audit-policies\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908911 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908941 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.908966 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\"
(UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-session\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.909767 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2bdb1863-e77b-4d9c-828b-413c1a385dda-audit-dir\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.910207 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.910589 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-audit-policies\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.915218 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"
Dec 12 00:10:57 crc kubenswrapper[4917]: I1212
00:10:57.916342 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.916343 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.916368 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.917858 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.918050 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.918394 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.918588 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-system-session\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.918639 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-template-login\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.921037 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2bdb1863-e77b-4d9c-828b-413c1a385dda-v4-0-config-user-template-error\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 
00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.925919 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.929082 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7t58\" (UniqueName: \"kubernetes.io/projected/2bdb1863-e77b-4d9c-828b-413c1a385dda-kube-api-access-k7t58\") pod \"oauth-openshift-6fdcc7ff8c-p2gm5\" (UID: \"2bdb1863-e77b-4d9c-828b-413c1a385dda\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:57 crc kubenswrapper[4917]: I1212 00:10:57.989925 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.010917 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.015913 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.039523 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.082308 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.162694 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.210883 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5"] Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.240522 4917 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" event={"ID":"2bdb1863-e77b-4d9c-828b-413c1a385dda","Type":"ContainerStarted","Data":"fb6d095b790386d778a0688ebac72e49bd35f3eb7b17fa84dafa5242bd249aaf"} Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.286532 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.374539 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.494190 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.504370 4917 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.531942 4917 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.532226 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1" gracePeriod=5 Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.575630 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.617976 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.648479 4917 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.672262 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.722463 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.812446 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.971247 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.983088 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 12 00:10:58 crc kubenswrapper[4917]: I1212 00:10:58.995394 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.012038 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.034682 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.177729 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.247605 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" 
event={"ID":"2bdb1863-e77b-4d9c-828b-413c1a385dda","Type":"ContainerStarted","Data":"e9005f5f35da5912797381abadeadc91f3f118eb3e39a2292c1e5c114786edfd"} Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.247870 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.253397 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.268218 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-p2gm5" podStartSLOduration=59.268196701 podStartE2EDuration="59.268196701s" podCreationTimestamp="2025-12-12 00:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:10:59.267377818 +0000 UTC m=+294.045178631" watchObservedRunningTime="2025-12-12 00:10:59.268196701 +0000 UTC m=+294.045997524" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.339963 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.373214 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.452794 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.561519 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.564464 4917 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 12 00:10:59 crc kubenswrapper[4917]: I1212 00:10:59.571307 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 12 00:11:00 crc kubenswrapper[4917]: I1212 00:11:00.084722 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 12 00:11:00 crc kubenswrapper[4917]: I1212 00:11:00.424130 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 12 00:11:00 crc kubenswrapper[4917]: I1212 00:11:00.458138 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 12 00:11:00 crc kubenswrapper[4917]: I1212 00:11:00.484061 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 12 00:11:00 crc kubenswrapper[4917]: I1212 00:11:00.568058 4917 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 12 00:11:00 crc kubenswrapper[4917]: I1212 00:11:00.580856 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 12 00:11:00 crc kubenswrapper[4917]: I1212 00:11:00.801064 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 12 00:11:00 crc kubenswrapper[4917]: I1212 00:11:00.823448 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 12 00:11:01 crc kubenswrapper[4917]: I1212 00:11:01.139620 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 12 00:11:01 crc kubenswrapper[4917]: I1212 00:11:01.156342 4917 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 12 00:11:01 crc kubenswrapper[4917]: I1212 00:11:01.272418 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 12 00:11:01 crc kubenswrapper[4917]: I1212 00:11:01.331240 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 12 00:11:01 crc kubenswrapper[4917]: I1212 00:11:01.791347 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.113204 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.113284 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.215066 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.215547 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.215594 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.215615 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.215186 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.215743 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.215798 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.215868 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.215957 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.216257 4917 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.216286 4917 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.216300 4917 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.216313 4917 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.224486 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.280215 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.280310 4917 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1" exitCode=137 Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.280414 4917 scope.go:117] "RemoveContainer" containerID="12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.280485 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.304346 4917 scope.go:117] "RemoveContainer" containerID="12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1" Dec 12 00:11:04 crc kubenswrapper[4917]: E1212 00:11:04.305164 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1\": container with ID starting with 12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1 not found: ID does not exist" containerID="12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.305224 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1"} err="failed to get container status \"12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1\": rpc error: code = NotFound desc = could not find container 
\"12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1\": container with ID starting with 12d69e0a3043f4acb9ab3e546608d2dfc492f6891f6386a10dd628c4d63361c1 not found: ID does not exist" Dec 12 00:11:04 crc kubenswrapper[4917]: I1212 00:11:04.317514 4917 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:05 crc kubenswrapper[4917]: I1212 00:11:05.608511 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 12 00:11:05 crc kubenswrapper[4917]: I1212 00:11:05.608821 4917 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 12 00:11:05 crc kubenswrapper[4917]: I1212 00:11:05.619965 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 00:11:05 crc kubenswrapper[4917]: I1212 00:11:05.620012 4917 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4c1eff42-1f80-42a2-aeb7-b465d0d54df8" Dec 12 00:11:05 crc kubenswrapper[4917]: I1212 00:11:05.624178 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 00:11:05 crc kubenswrapper[4917]: I1212 00:11:05.624223 4917 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4c1eff42-1f80-42a2-aeb7-b465d0d54df8" Dec 12 00:11:10 crc kubenswrapper[4917]: I1212 00:11:10.799926 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 
00:11:13.060675 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h6z88"] Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.060973 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" podUID="c31d8f10-5195-45b7-9809-19edb34d404b" containerName="controller-manager" containerID="cri-o://9b733cc20fa73f535259a24095a8e301e57bc0c118d0d0ec8ba9201ae1ff0152" gracePeriod=30 Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.161695 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct"] Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.162051 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" podUID="4b029a79-9426-4acc-af09-c11c8216777c" containerName="route-controller-manager" containerID="cri-o://018aaddea4c628f1a7d3769b1562c602faf9c6f7433d4886a4677a851f1672eb" gracePeriod=30 Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.333209 4917 generic.go:334] "Generic (PLEG): container finished" podID="4b029a79-9426-4acc-af09-c11c8216777c" containerID="018aaddea4c628f1a7d3769b1562c602faf9c6f7433d4886a4677a851f1672eb" exitCode=0 Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.333275 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" event={"ID":"4b029a79-9426-4acc-af09-c11c8216777c","Type":"ContainerDied","Data":"018aaddea4c628f1a7d3769b1562c602faf9c6f7433d4886a4677a851f1672eb"} Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.337321 4917 generic.go:334] "Generic (PLEG): container finished" podID="c31d8f10-5195-45b7-9809-19edb34d404b" containerID="9b733cc20fa73f535259a24095a8e301e57bc0c118d0d0ec8ba9201ae1ff0152" 
exitCode=0 Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.337361 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" event={"ID":"c31d8f10-5195-45b7-9809-19edb34d404b","Type":"ContainerDied","Data":"9b733cc20fa73f535259a24095a8e301e57bc0c118d0d0ec8ba9201ae1ff0152"} Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.606256 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.608526 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.746872 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-client-ca\") pod \"c31d8f10-5195-45b7-9809-19edb34d404b\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.746998 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-proxy-ca-bundles\") pod \"c31d8f10-5195-45b7-9809-19edb34d404b\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.747035 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-client-ca\") pod \"4b029a79-9426-4acc-af09-c11c8216777c\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.747091 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-config\") pod \"c31d8f10-5195-45b7-9809-19edb34d404b\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.747170 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zpk9\" (UniqueName: \"kubernetes.io/projected/4b029a79-9426-4acc-af09-c11c8216777c-kube-api-access-5zpk9\") pod \"4b029a79-9426-4acc-af09-c11c8216777c\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.747214 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b029a79-9426-4acc-af09-c11c8216777c-serving-cert\") pod \"4b029a79-9426-4acc-af09-c11c8216777c\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.747304 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c31d8f10-5195-45b7-9809-19edb34d404b-serving-cert\") pod \"c31d8f10-5195-45b7-9809-19edb34d404b\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.747337 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95hdn\" (UniqueName: \"kubernetes.io/projected/c31d8f10-5195-45b7-9809-19edb34d404b-kube-api-access-95hdn\") pod \"c31d8f10-5195-45b7-9809-19edb34d404b\" (UID: \"c31d8f10-5195-45b7-9809-19edb34d404b\") " Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.747371 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-config\") pod \"4b029a79-9426-4acc-af09-c11c8216777c\" (UID: \"4b029a79-9426-4acc-af09-c11c8216777c\") " Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 
00:11:13.748357 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c31d8f10-5195-45b7-9809-19edb34d404b" (UID: "c31d8f10-5195-45b7-9809-19edb34d404b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.749038 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-client-ca" (OuterVolumeSpecName: "client-ca") pod "c31d8f10-5195-45b7-9809-19edb34d404b" (UID: "c31d8f10-5195-45b7-9809-19edb34d404b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.748712 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-config" (OuterVolumeSpecName: "config") pod "c31d8f10-5195-45b7-9809-19edb34d404b" (UID: "c31d8f10-5195-45b7-9809-19edb34d404b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.749112 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-client-ca" (OuterVolumeSpecName: "client-ca") pod "4b029a79-9426-4acc-af09-c11c8216777c" (UID: "4b029a79-9426-4acc-af09-c11c8216777c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.749199 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-config" (OuterVolumeSpecName: "config") pod "4b029a79-9426-4acc-af09-c11c8216777c" (UID: "4b029a79-9426-4acc-af09-c11c8216777c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.754984 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b029a79-9426-4acc-af09-c11c8216777c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4b029a79-9426-4acc-af09-c11c8216777c" (UID: "4b029a79-9426-4acc-af09-c11c8216777c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.755142 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31d8f10-5195-45b7-9809-19edb34d404b-kube-api-access-95hdn" (OuterVolumeSpecName: "kube-api-access-95hdn") pod "c31d8f10-5195-45b7-9809-19edb34d404b" (UID: "c31d8f10-5195-45b7-9809-19edb34d404b"). InnerVolumeSpecName "kube-api-access-95hdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.755269 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31d8f10-5195-45b7-9809-19edb34d404b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c31d8f10-5195-45b7-9809-19edb34d404b" (UID: "c31d8f10-5195-45b7-9809-19edb34d404b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.755277 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b029a79-9426-4acc-af09-c11c8216777c-kube-api-access-5zpk9" (OuterVolumeSpecName: "kube-api-access-5zpk9") pod "4b029a79-9426-4acc-af09-c11c8216777c" (UID: "4b029a79-9426-4acc-af09-c11c8216777c"). InnerVolumeSpecName "kube-api-access-5zpk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.849097 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c31d8f10-5195-45b7-9809-19edb34d404b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.849146 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95hdn\" (UniqueName: \"kubernetes.io/projected/c31d8f10-5195-45b7-9809-19edb34d404b-kube-api-access-95hdn\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.849161 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.849173 4917 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.849184 4917 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.849198 4917 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4b029a79-9426-4acc-af09-c11c8216777c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.849209 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31d8f10-5195-45b7-9809-19edb34d404b-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.849219 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zpk9\" (UniqueName: \"kubernetes.io/projected/4b029a79-9426-4acc-af09-c11c8216777c-kube-api-access-5zpk9\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:13 crc kubenswrapper[4917]: I1212 00:11:13.849229 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b029a79-9426-4acc-af09-c11c8216777c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.351380 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" event={"ID":"c31d8f10-5195-45b7-9809-19edb34d404b","Type":"ContainerDied","Data":"74af3995641cf5abda3383f7d209eb79e0c5367fd7b9c224ec915a84582d1d47"} Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.351433 4917 scope.go:117] "RemoveContainer" containerID="9b733cc20fa73f535259a24095a8e301e57bc0c118d0d0ec8ba9201ae1ff0152" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.351534 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h6z88" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.355629 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" event={"ID":"4b029a79-9426-4acc-af09-c11c8216777c","Type":"ContainerDied","Data":"ba457d469c2336538d0dc57bad52c11dd265182877ca05820e03b67b3b25d87f"} Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.355733 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.380820 4917 scope.go:117] "RemoveContainer" containerID="018aaddea4c628f1a7d3769b1562c602faf9c6f7433d4886a4677a851f1672eb" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.388354 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85b9785f75-5r7b2"] Dec 12 00:11:14 crc kubenswrapper[4917]: E1212 00:11:14.389240 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31d8f10-5195-45b7-9809-19edb34d404b" containerName="controller-manager" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.389268 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31d8f10-5195-45b7-9809-19edb34d404b" containerName="controller-manager" Dec 12 00:11:14 crc kubenswrapper[4917]: E1212 00:11:14.389286 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b029a79-9426-4acc-af09-c11c8216777c" containerName="route-controller-manager" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.389296 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b029a79-9426-4acc-af09-c11c8216777c" containerName="route-controller-manager" Dec 12 00:11:14 crc kubenswrapper[4917]: E1212 00:11:14.389320 4917 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.389329 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.389454 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.389471 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31d8f10-5195-45b7-9809-19edb34d404b" containerName="controller-manager" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.389487 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b029a79-9426-4acc-af09-c11c8216777c" containerName="route-controller-manager" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.390028 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.395816 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.396038 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.398868 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.398896 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.399140 4917 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.401870 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.402053 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.408119 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h6z88"] Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.416662 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h6z88"] Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.434423 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85b9785f75-5r7b2"] Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.440107 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8"] Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.442749 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.449466 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.450114 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.450154 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.450437 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.451059 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8"] Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.451195 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.459890 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct"] Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.460472 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.467321 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8zct"] Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.557429 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bae74f88-085a-4fba-9b05-1c7c27cbbd66-serving-cert\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.557485 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-client-ca\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.557528 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwg46\" (UniqueName: \"kubernetes.io/projected/22f82d99-f615-429e-a390-0df242a9dc1e-kube-api-access-kwg46\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.557557 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-proxy-ca-bundles\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.557604 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-client-ca\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: 
\"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.557699 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-config\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.557727 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-config\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.557863 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22f82d99-f615-429e-a390-0df242a9dc1e-serving-cert\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.557896 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfdcx\" (UniqueName: \"kubernetes.io/projected/bae74f88-085a-4fba-9b05-1c7c27cbbd66-kube-api-access-wfdcx\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.657475 4917 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.659154 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22f82d99-f615-429e-a390-0df242a9dc1e-serving-cert\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.659220 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfdcx\" (UniqueName: \"kubernetes.io/projected/bae74f88-085a-4fba-9b05-1c7c27cbbd66-kube-api-access-wfdcx\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.659395 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-client-ca\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.659437 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bae74f88-085a-4fba-9b05-1c7c27cbbd66-serving-cert\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.659473 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwg46\" (UniqueName: 
\"kubernetes.io/projected/22f82d99-f615-429e-a390-0df242a9dc1e-kube-api-access-kwg46\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.659500 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-proxy-ca-bundles\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.659549 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-client-ca\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.659609 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-config\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.659668 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-config\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 
00:11:14.663285 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-config\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.671977 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-client-ca\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.672093 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-config\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.672625 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-client-ca\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.672819 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22f82d99-f615-429e-a390-0df242a9dc1e-serving-cert\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" 
Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.672841 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bae74f88-085a-4fba-9b05-1c7c27cbbd66-serving-cert\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.672988 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-proxy-ca-bundles\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.675272 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfdcx\" (UniqueName: \"kubernetes.io/projected/bae74f88-085a-4fba-9b05-1c7c27cbbd66-kube-api-access-wfdcx\") pod \"route-controller-manager-77457b7c4c-2gfc8\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") " pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.675386 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwg46\" (UniqueName: \"kubernetes.io/projected/22f82d99-f615-429e-a390-0df242a9dc1e-kube-api-access-kwg46\") pod \"controller-manager-85b9785f75-5r7b2\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") " pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.711497 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.745048 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.758069 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.918722 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85b9785f75-5r7b2"] Dec 12 00:11:14 crc kubenswrapper[4917]: W1212 00:11:14.928772 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22f82d99_f615_429e_a390_0df242a9dc1e.slice/crio-8de7631bdeff89d5900b987817fcf7230f1f424ec3d25ca586c7006ed0b28d5c WatchSource:0}: Error finding container 8de7631bdeff89d5900b987817fcf7230f1f424ec3d25ca586c7006ed0b28d5c: Status 404 returned error can't find the container with id 8de7631bdeff89d5900b987817fcf7230f1f424ec3d25ca586c7006ed0b28d5c Dec 12 00:11:14 crc kubenswrapper[4917]: I1212 00:11:14.971109 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8"] Dec 12 00:11:14 crc kubenswrapper[4917]: W1212 00:11:14.972227 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae74f88_085a_4fba_9b05_1c7c27cbbd66.slice/crio-a81f44230c6492620b3e96786174326c794d5a07beaa4107a31de1a2f8102b0f WatchSource:0}: Error finding container a81f44230c6492620b3e96786174326c794d5a07beaa4107a31de1a2f8102b0f: Status 404 returned error can't find the container with id a81f44230c6492620b3e96786174326c794d5a07beaa4107a31de1a2f8102b0f Dec 12 00:11:15 crc 
kubenswrapper[4917]: I1212 00:11:15.362395 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" event={"ID":"bae74f88-085a-4fba-9b05-1c7c27cbbd66","Type":"ContainerStarted","Data":"a81f44230c6492620b3e96786174326c794d5a07beaa4107a31de1a2f8102b0f"} Dec 12 00:11:15 crc kubenswrapper[4917]: I1212 00:11:15.363417 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" event={"ID":"22f82d99-f615-429e-a390-0df242a9dc1e","Type":"ContainerStarted","Data":"8de7631bdeff89d5900b987817fcf7230f1f424ec3d25ca586c7006ed0b28d5c"} Dec 12 00:11:15 crc kubenswrapper[4917]: I1212 00:11:15.610866 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b029a79-9426-4acc-af09-c11c8216777c" path="/var/lib/kubelet/pods/4b029a79-9426-4acc-af09-c11c8216777c/volumes" Dec 12 00:11:15 crc kubenswrapper[4917]: I1212 00:11:15.611655 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31d8f10-5195-45b7-9809-19edb34d404b" path="/var/lib/kubelet/pods/c31d8f10-5195-45b7-9809-19edb34d404b/volumes" Dec 12 00:11:16 crc kubenswrapper[4917]: I1212 00:11:16.371233 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" event={"ID":"bae74f88-085a-4fba-9b05-1c7c27cbbd66","Type":"ContainerStarted","Data":"ab0553b730e9b8a30ddf71c55728cda67de898394ad442b94099abe3c8bb15b3"} Dec 12 00:11:16 crc kubenswrapper[4917]: I1212 00:11:16.371588 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:16 crc kubenswrapper[4917]: I1212 00:11:16.372978 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" 
event={"ID":"22f82d99-f615-429e-a390-0df242a9dc1e","Type":"ContainerStarted","Data":"0a53b8523767415f9d703a5623f7d0464d0a30530736498ec770d6739a8910d2"} Dec 12 00:11:16 crc kubenswrapper[4917]: I1212 00:11:16.373163 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:16 crc kubenswrapper[4917]: I1212 00:11:16.376870 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" Dec 12 00:11:16 crc kubenswrapper[4917]: I1212 00:11:16.377296 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:16 crc kubenswrapper[4917]: I1212 00:11:16.391876 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" podStartSLOduration=2.391856222 podStartE2EDuration="2.391856222s" podCreationTimestamp="2025-12-12 00:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:11:16.387264325 +0000 UTC m=+311.165065138" watchObservedRunningTime="2025-12-12 00:11:16.391856222 +0000 UTC m=+311.169657035" Dec 12 00:11:16 crc kubenswrapper[4917]: I1212 00:11:16.406411 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" podStartSLOduration=2.406390623 podStartE2EDuration="2.406390623s" podCreationTimestamp="2025-12-12 00:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:11:16.405540359 +0000 UTC m=+311.183341172" watchObservedRunningTime="2025-12-12 00:11:16.406390623 +0000 UTC m=+311.184191436" Dec 12 00:11:16 
crc kubenswrapper[4917]: I1212 00:11:16.622766 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 12 00:11:16 crc kubenswrapper[4917]: I1212 00:11:16.860295 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 12 00:11:17 crc kubenswrapper[4917]: I1212 00:11:17.334629 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 12 00:11:17 crc kubenswrapper[4917]: I1212 00:11:17.475260 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 12 00:11:17 crc kubenswrapper[4917]: I1212 00:11:17.558303 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 12 00:11:18 crc kubenswrapper[4917]: I1212 00:11:18.952938 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 12 00:11:20 crc kubenswrapper[4917]: I1212 00:11:20.226114 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 12 00:11:20 crc kubenswrapper[4917]: I1212 00:11:20.615391 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 12 00:11:22 crc kubenswrapper[4917]: I1212 00:11:22.752688 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 12 00:11:23 crc kubenswrapper[4917]: I1212 00:11:23.643859 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 12 00:11:23 crc kubenswrapper[4917]: I1212 00:11:23.705722 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 12 00:11:24 crc kubenswrapper[4917]: I1212 00:11:24.560673 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 12 00:11:26 crc kubenswrapper[4917]: I1212 00:11:26.550961 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 12 00:11:27 crc kubenswrapper[4917]: I1212 00:11:27.015397 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 12 00:11:28 crc kubenswrapper[4917]: I1212 00:11:28.928564 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 12 00:11:29 crc kubenswrapper[4917]: I1212 00:11:29.433051 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 12 00:11:32 crc kubenswrapper[4917]: I1212 00:11:32.919438 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 12 00:11:32 crc kubenswrapper[4917]: I1212 00:11:32.954264 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 12 00:11:32 crc kubenswrapper[4917]: I1212 00:11:32.964296 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8"]
Dec 12 00:11:32 crc kubenswrapper[4917]: I1212 00:11:32.964566 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" podUID="bae74f88-085a-4fba-9b05-1c7c27cbbd66" containerName="route-controller-manager" containerID="cri-o://ab0553b730e9b8a30ddf71c55728cda67de898394ad442b94099abe3c8bb15b3" gracePeriod=30
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.159448 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.487116 4917 generic.go:334] "Generic (PLEG): container finished" podID="bae74f88-085a-4fba-9b05-1c7c27cbbd66" containerID="ab0553b730e9b8a30ddf71c55728cda67de898394ad442b94099abe3c8bb15b3" exitCode=0
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.487172 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" event={"ID":"bae74f88-085a-4fba-9b05-1c7c27cbbd66","Type":"ContainerDied","Data":"ab0553b730e9b8a30ddf71c55728cda67de898394ad442b94099abe3c8bb15b3"}
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.597971 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.631275 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"]
Dec 12 00:11:34 crc kubenswrapper[4917]: E1212 00:11:34.632010 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae74f88-085a-4fba-9b05-1c7c27cbbd66" containerName="route-controller-manager"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.632031 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae74f88-085a-4fba-9b05-1c7c27cbbd66" containerName="route-controller-manager"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.633490 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae74f88-085a-4fba-9b05-1c7c27cbbd66" containerName="route-controller-manager"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.635618 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.643754 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"]
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.717660 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-client-ca\") pod \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") "
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.718323 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bae74f88-085a-4fba-9b05-1c7c27cbbd66-serving-cert\") pod \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") "
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.718497 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-config\") pod \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") "
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.718689 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfdcx\" (UniqueName: \"kubernetes.io/projected/bae74f88-085a-4fba-9b05-1c7c27cbbd66-kube-api-access-wfdcx\") pod \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\" (UID: \"bae74f88-085a-4fba-9b05-1c7c27cbbd66\") "
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.718433 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-client-ca" (OuterVolumeSpecName: "client-ca") pod "bae74f88-085a-4fba-9b05-1c7c27cbbd66" (UID: "bae74f88-085a-4fba-9b05-1c7c27cbbd66"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.719234 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-client-ca\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.719362 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82w7l\" (UniqueName: \"kubernetes.io/projected/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-kube-api-access-82w7l\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.719488 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-config\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.719632 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-serving-cert\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.719811 4917 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.719685 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-config" (OuterVolumeSpecName: "config") pod "bae74f88-085a-4fba-9b05-1c7c27cbbd66" (UID: "bae74f88-085a-4fba-9b05-1c7c27cbbd66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.725908 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae74f88-085a-4fba-9b05-1c7c27cbbd66-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bae74f88-085a-4fba-9b05-1c7c27cbbd66" (UID: "bae74f88-085a-4fba-9b05-1c7c27cbbd66"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.726792 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae74f88-085a-4fba-9b05-1c7c27cbbd66-kube-api-access-wfdcx" (OuterVolumeSpecName: "kube-api-access-wfdcx") pod "bae74f88-085a-4fba-9b05-1c7c27cbbd66" (UID: "bae74f88-085a-4fba-9b05-1c7c27cbbd66"). InnerVolumeSpecName "kube-api-access-wfdcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.821358 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-client-ca\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.821414 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82w7l\" (UniqueName: \"kubernetes.io/projected/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-kube-api-access-82w7l\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.821441 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-config\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.821472 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-serving-cert\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.821514 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bae74f88-085a-4fba-9b05-1c7c27cbbd66-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.821611 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae74f88-085a-4fba-9b05-1c7c27cbbd66-config\") on node \"crc\" DevicePath \"\""
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.821621 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfdcx\" (UniqueName: \"kubernetes.io/projected/bae74f88-085a-4fba-9b05-1c7c27cbbd66-kube-api-access-wfdcx\") on node \"crc\" DevicePath \"\""
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.823072 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-client-ca\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.823583 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-config\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.826123 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-serving-cert\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.841000 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82w7l\" (UniqueName: \"kubernetes.io/projected/3d7be2ca-e1fa-495a-bfd5-654cca46c0df-kube-api-access-82w7l\") pod \"route-controller-manager-67cd78cfc7-cv9hd\" (UID: \"3d7be2ca-e1fa-495a-bfd5-654cca46c0df\") " pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.966818 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:34 crc kubenswrapper[4917]: I1212 00:11:34.981929 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 12 00:11:35 crc kubenswrapper[4917]: I1212 00:11:35.357662 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"]
Dec 12 00:11:35 crc kubenswrapper[4917]: W1212 00:11:35.369268 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d7be2ca_e1fa_495a_bfd5_654cca46c0df.slice/crio-441198d1425059ccc06e74b84c83613e91427ad980651b4b057076bdcbc8707c WatchSource:0}: Error finding container 441198d1425059ccc06e74b84c83613e91427ad980651b4b057076bdcbc8707c: Status 404 returned error can't find the container with id 441198d1425059ccc06e74b84c83613e91427ad980651b4b057076bdcbc8707c
Dec 12 00:11:35 crc kubenswrapper[4917]: I1212 00:11:35.494185 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8" event={"ID":"bae74f88-085a-4fba-9b05-1c7c27cbbd66","Type":"ContainerDied","Data":"a81f44230c6492620b3e96786174326c794d5a07beaa4107a31de1a2f8102b0f"}
Dec 12 00:11:35 crc kubenswrapper[4917]: I1212 00:11:35.494230 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8"
Dec 12 00:11:35 crc kubenswrapper[4917]: I1212 00:11:35.494543 4917 scope.go:117] "RemoveContainer" containerID="ab0553b730e9b8a30ddf71c55728cda67de898394ad442b94099abe3c8bb15b3"
Dec 12 00:11:35 crc kubenswrapper[4917]: I1212 00:11:35.495366 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd" event={"ID":"3d7be2ca-e1fa-495a-bfd5-654cca46c0df","Type":"ContainerStarted","Data":"441198d1425059ccc06e74b84c83613e91427ad980651b4b057076bdcbc8707c"}
Dec 12 00:11:35 crc kubenswrapper[4917]: I1212 00:11:35.535150 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8"]
Dec 12 00:11:35 crc kubenswrapper[4917]: I1212 00:11:35.535694 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77457b7c4c-2gfc8"]
Dec 12 00:11:35 crc kubenswrapper[4917]: I1212 00:11:35.610184 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae74f88-085a-4fba-9b05-1c7c27cbbd66" path="/var/lib/kubelet/pods/bae74f88-085a-4fba-9b05-1c7c27cbbd66/volumes"
Dec 12 00:11:36 crc kubenswrapper[4917]: I1212 00:11:36.174766 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 12 00:11:37 crc kubenswrapper[4917]: I1212 00:11:37.508764 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd" event={"ID":"3d7be2ca-e1fa-495a-bfd5-654cca46c0df","Type":"ContainerStarted","Data":"1ff34f0364ef70c97f6faa5661457a3ff50a017ccad2dd21c98dc45f57818499"}
Dec 12 00:11:37 crc kubenswrapper[4917]: I1212 00:11:37.509151 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:37 crc kubenswrapper[4917]: I1212 00:11:37.515901 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd"
Dec 12 00:11:37 crc kubenswrapper[4917]: I1212 00:11:37.526392 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd" podStartSLOduration=5.52637787 podStartE2EDuration="5.52637787s" podCreationTimestamp="2025-12-12 00:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:11:37.525305561 +0000 UTC m=+332.303106384" watchObservedRunningTime="2025-12-12 00:11:37.52637787 +0000 UTC m=+332.304178673"
Dec 12 00:11:40 crc kubenswrapper[4917]: I1212 00:11:40.376720 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 12 00:11:40 crc kubenswrapper[4917]: I1212 00:11:40.608373 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 12 00:11:40 crc kubenswrapper[4917]: I1212 00:11:40.755266 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 12 00:11:41 crc kubenswrapper[4917]: I1212 00:11:41.265080 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 12 00:11:45 crc kubenswrapper[4917]: I1212 00:11:45.936478 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x86v4"]
Dec 12 00:11:45 crc kubenswrapper[4917]: I1212 00:11:45.937735 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:45 crc kubenswrapper[4917]: I1212 00:11:45.963359 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x86v4"]
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.078869 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf403a35-cd42-4a62-84da-e94b42c457d1-registry-certificates\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.078933 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf403a35-cd42-4a62-84da-e94b42c457d1-registry-tls\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.078980 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf403a35-cd42-4a62-84da-e94b42c457d1-trusted-ca\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.079029 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf403a35-cd42-4a62-84da-e94b42c457d1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.079052 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf403a35-cd42-4a62-84da-e94b42c457d1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.079086 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.079124 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlwqj\" (UniqueName: \"kubernetes.io/projected/cf403a35-cd42-4a62-84da-e94b42c457d1-kube-api-access-mlwqj\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.079175 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf403a35-cd42-4a62-84da-e94b42c457d1-bound-sa-token\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.102554 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.180146 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf403a35-cd42-4a62-84da-e94b42c457d1-bound-sa-token\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.180199 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf403a35-cd42-4a62-84da-e94b42c457d1-registry-certificates\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.180218 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf403a35-cd42-4a62-84da-e94b42c457d1-registry-tls\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.180250 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf403a35-cd42-4a62-84da-e94b42c457d1-trusted-ca\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.180281 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf403a35-cd42-4a62-84da-e94b42c457d1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.180298 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf403a35-cd42-4a62-84da-e94b42c457d1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.180373 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlwqj\" (UniqueName: \"kubernetes.io/projected/cf403a35-cd42-4a62-84da-e94b42c457d1-kube-api-access-mlwqj\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.181584 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf403a35-cd42-4a62-84da-e94b42c457d1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.182075 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf403a35-cd42-4a62-84da-e94b42c457d1-trusted-ca\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.182348 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf403a35-cd42-4a62-84da-e94b42c457d1-registry-certificates\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.188432 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf403a35-cd42-4a62-84da-e94b42c457d1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.190416 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf403a35-cd42-4a62-84da-e94b42c457d1-registry-tls\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.201671 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlwqj\" (UniqueName: \"kubernetes.io/projected/cf403a35-cd42-4a62-84da-e94b42c457d1-kube-api-access-mlwqj\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.203074 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf403a35-cd42-4a62-84da-e94b42c457d1-bound-sa-token\") pod \"image-registry-66df7c8f76-x86v4\" (UID: \"cf403a35-cd42-4a62-84da-e94b42c457d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.254566 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:46 crc kubenswrapper[4917]: I1212 00:11:46.665053 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x86v4"]
Dec 12 00:11:47 crc kubenswrapper[4917]: I1212 00:11:47.564405 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-x86v4" event={"ID":"cf403a35-cd42-4a62-84da-e94b42c457d1","Type":"ContainerStarted","Data":"4d015c20725e7dc73a2eb2e98d2d0b5ee0baf7ada33f1dc00cc5ab170ef0a278"}
Dec 12 00:11:50 crc kubenswrapper[4917]: I1212 00:11:50.168432 4917 patch_prober.go:28] interesting pod/console-f9d7485db-8frb7 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 00:11:50 crc kubenswrapper[4917]: I1212 00:11:50.168517 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-8frb7" podUID="234c3156-bf4c-464d-8ee4-957474f3bb82" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 12 00:11:53 crc kubenswrapper[4917]: I1212 00:11:53.003570 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85b9785f75-5r7b2"]
Dec 12 00:11:53 crc kubenswrapper[4917]: I1212 00:11:53.004199 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" podUID="22f82d99-f615-429e-a390-0df242a9dc1e" containerName="controller-manager" containerID="cri-o://0a53b8523767415f9d703a5623f7d0464d0a30530736498ec770d6739a8910d2" gracePeriod=30
Dec 12 00:11:53 crc kubenswrapper[4917]: I1212 00:11:53.600488 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-x86v4" event={"ID":"cf403a35-cd42-4a62-84da-e94b42c457d1","Type":"ContainerStarted","Data":"15e4627a720346e45d1bb823437d4e0b8cb3af230b60150c3149e508b632ab55"}
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.608377 4917 generic.go:334] "Generic (PLEG): container finished" podID="22f82d99-f615-429e-a390-0df242a9dc1e" containerID="0a53b8523767415f9d703a5623f7d0464d0a30530736498ec770d6739a8910d2" exitCode=0
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.608629 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" event={"ID":"22f82d99-f615-429e-a390-0df242a9dc1e","Type":"ContainerDied","Data":"0a53b8523767415f9d703a5623f7d0464d0a30530736498ec770d6739a8910d2"}
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.609213 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" event={"ID":"22f82d99-f615-429e-a390-0df242a9dc1e","Type":"ContainerDied","Data":"8de7631bdeff89d5900b987817fcf7230f1f424ec3d25ca586c7006ed0b28d5c"}
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.609240 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8de7631bdeff89d5900b987817fcf7230f1f424ec3d25ca586c7006ed0b28d5c"
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.609499 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-x86v4"
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.615355 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2"
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.631395 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-x86v4" podStartSLOduration=9.631380045 podStartE2EDuration="9.631380045s" podCreationTimestamp="2025-12-12 00:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:11:54.627567302 +0000 UTC m=+349.405368125" watchObservedRunningTime="2025-12-12 00:11:54.631380045 +0000 UTC m=+349.409180858"
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.654454 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d6454f4c9-xzc72"]
Dec 12 00:11:54 crc kubenswrapper[4917]: E1212 00:11:54.654802 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22f82d99-f615-429e-a390-0df242a9dc1e" containerName="controller-manager"
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.654818 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="22f82d99-f615-429e-a390-0df242a9dc1e" containerName="controller-manager"
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.654937 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="22f82d99-f615-429e-a390-0df242a9dc1e" containerName="controller-manager"
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.656537 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72"
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.662304 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d6454f4c9-xzc72"]
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.719753 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-config\") pod \"22f82d99-f615-429e-a390-0df242a9dc1e\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") "
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.719839 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-client-ca\") pod \"22f82d99-f615-429e-a390-0df242a9dc1e\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") "
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.719904 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22f82d99-f615-429e-a390-0df242a9dc1e-serving-cert\") pod \"22f82d99-f615-429e-a390-0df242a9dc1e\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") "
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.720031 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwg46\" (UniqueName: \"kubernetes.io/projected/22f82d99-f615-429e-a390-0df242a9dc1e-kube-api-access-kwg46\") pod \"22f82d99-f615-429e-a390-0df242a9dc1e\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") "
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.720066 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-proxy-ca-bundles\") pod \"22f82d99-f615-429e-a390-0df242a9dc1e\" (UID: \"22f82d99-f615-429e-a390-0df242a9dc1e\") "
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.721998 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-client-ca" (OuterVolumeSpecName: "client-ca") pod "22f82d99-f615-429e-a390-0df242a9dc1e" (UID: "22f82d99-f615-429e-a390-0df242a9dc1e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.722051 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-config" (OuterVolumeSpecName: "config") pod "22f82d99-f615-429e-a390-0df242a9dc1e" (UID: "22f82d99-f615-429e-a390-0df242a9dc1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.723535 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "22f82d99-f615-429e-a390-0df242a9dc1e" (UID: "22f82d99-f615-429e-a390-0df242a9dc1e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.730577 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22f82d99-f615-429e-a390-0df242a9dc1e-kube-api-access-kwg46" (OuterVolumeSpecName: "kube-api-access-kwg46") pod "22f82d99-f615-429e-a390-0df242a9dc1e" (UID: "22f82d99-f615-429e-a390-0df242a9dc1e"). InnerVolumeSpecName "kube-api-access-kwg46".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.730639 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22f82d99-f615-429e-a390-0df242a9dc1e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "22f82d99-f615-429e-a390-0df242a9dc1e" (UID: "22f82d99-f615-429e-a390-0df242a9dc1e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.822003 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef07310-5c97-491c-891d-e7de6a0c9597-config\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.822078 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7b6w\" (UniqueName: \"kubernetes.io/projected/fef07310-5c97-491c-891d-e7de6a0c9597-kube-api-access-x7b6w\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.822128 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fef07310-5c97-491c-891d-e7de6a0c9597-proxy-ca-bundles\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.822184 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/fef07310-5c97-491c-891d-e7de6a0c9597-client-ca\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.822213 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef07310-5c97-491c-891d-e7de6a0c9597-serving-cert\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.822274 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwg46\" (UniqueName: \"kubernetes.io/projected/22f82d99-f615-429e-a390-0df242a9dc1e-kube-api-access-kwg46\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.822289 4917 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.822303 4917 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.822315 4917 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22f82d99-f615-429e-a390-0df242a9dc1e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.822326 4917 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22f82d99-f615-429e-a390-0df242a9dc1e-serving-cert\") on node \"crc\" 
DevicePath \"\"" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.923741 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fef07310-5c97-491c-891d-e7de6a0c9597-client-ca\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.923931 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef07310-5c97-491c-891d-e7de6a0c9597-serving-cert\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.923977 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef07310-5c97-491c-891d-e7de6a0c9597-config\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.924016 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7b6w\" (UniqueName: \"kubernetes.io/projected/fef07310-5c97-491c-891d-e7de6a0c9597-kube-api-access-x7b6w\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.924050 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fef07310-5c97-491c-891d-e7de6a0c9597-proxy-ca-bundles\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: 
\"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.925098 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fef07310-5c97-491c-891d-e7de6a0c9597-client-ca\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.926129 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef07310-5c97-491c-891d-e7de6a0c9597-config\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.928570 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fef07310-5c97-491c-891d-e7de6a0c9597-proxy-ca-bundles\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.932159 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef07310-5c97-491c-891d-e7de6a0c9597-serving-cert\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.947694 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7b6w\" (UniqueName: 
\"kubernetes.io/projected/fef07310-5c97-491c-891d-e7de6a0c9597-kube-api-access-x7b6w\") pod \"controller-manager-7d6454f4c9-xzc72\" (UID: \"fef07310-5c97-491c-891d-e7de6a0c9597\") " pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:54 crc kubenswrapper[4917]: I1212 00:11:54.992437 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:55 crc kubenswrapper[4917]: I1212 00:11:55.198009 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d6454f4c9-xzc72"] Dec 12 00:11:55 crc kubenswrapper[4917]: I1212 00:11:55.614414 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85b9785f75-5r7b2" Dec 12 00:11:55 crc kubenswrapper[4917]: I1212 00:11:55.620498 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" event={"ID":"fef07310-5c97-491c-891d-e7de6a0c9597","Type":"ContainerStarted","Data":"986e26e9f77637d941327bd0db0e7c6b03bf64fc99d738e2fa58fe335465c306"} Dec 12 00:11:55 crc kubenswrapper[4917]: I1212 00:11:55.650929 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85b9785f75-5r7b2"] Dec 12 00:11:55 crc kubenswrapper[4917]: I1212 00:11:55.654322 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-85b9785f75-5r7b2"] Dec 12 00:11:57 crc kubenswrapper[4917]: I1212 00:11:57.610326 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22f82d99-f615-429e-a390-0df242a9dc1e" path="/var/lib/kubelet/pods/22f82d99-f615-429e-a390-0df242a9dc1e/volumes" Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.639110 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" event={"ID":"fef07310-5c97-491c-891d-e7de6a0c9597","Type":"ContainerStarted","Data":"468290d09ce8fe89d1e2ee800c5279e0dec3fcddfd32dad0560ecc01a7ebca0b"} Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.639980 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.646979 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.663166 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" podStartSLOduration=5.6631482779999995 podStartE2EDuration="5.663148278s" podCreationTimestamp="2025-12-12 00:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:11:58.656574152 +0000 UTC m=+353.434374985" watchObservedRunningTime="2025-12-12 00:11:58.663148278 +0000 UTC m=+353.440949111" Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.772383 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5t8z"] Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.772620 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f5t8z" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerName="registry-server" containerID="cri-o://44caefb1808c222da7b98b41b9d70ede0469cdeb28bb8185d2ff00f170c3ff7f" gracePeriod=30 Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.787972 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r7vjh"] Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 
00:11:58.795516 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r7vjh" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerName="registry-server" containerID="cri-o://93c6666c53cf15d90e8cd8575c773429791da5ee2623d357979e39bae5877779" gracePeriod=30 Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.802796 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8st8c"] Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.803093 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" podUID="24e2c7bd-c682-49e7-942c-eb8afe865602" containerName="marketplace-operator" containerID="cri-o://1d9a945b1a23830a725b1a6607875516e1dd5e297fb2cf0ffe345ee4645c5a36" gracePeriod=30 Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.815840 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7ml9"] Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.816159 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k7ml9" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerName="registry-server" containerID="cri-o://c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c" gracePeriod=30 Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.834107 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-882r8"] Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.835315 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bnjdr"] Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.835705 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bnjdr" 
podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerName="registry-server" containerID="cri-o://0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1" gracePeriod=30 Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.835950 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.862432 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-882r8"] Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.986449 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db163491-1bad-4b12-b00c-9f6b83a1b52f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-882r8\" (UID: \"db163491-1bad-4b12-b00c-9f6b83a1b52f\") " pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.986867 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db163491-1bad-4b12-b00c-9f6b83a1b52f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-882r8\" (UID: \"db163491-1bad-4b12-b00c-9f6b83a1b52f\") " pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:58 crc kubenswrapper[4917]: I1212 00:11:58.986906 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjptd\" (UniqueName: \"kubernetes.io/projected/db163491-1bad-4b12-b00c-9f6b83a1b52f-kube-api-access-vjptd\") pod \"marketplace-operator-79b997595-882r8\" (UID: \"db163491-1bad-4b12-b00c-9f6b83a1b52f\") " pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.088862 4917 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db163491-1bad-4b12-b00c-9f6b83a1b52f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-882r8\" (UID: \"db163491-1bad-4b12-b00c-9f6b83a1b52f\") " pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.088945 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db163491-1bad-4b12-b00c-9f6b83a1b52f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-882r8\" (UID: \"db163491-1bad-4b12-b00c-9f6b83a1b52f\") " pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.088983 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjptd\" (UniqueName: \"kubernetes.io/projected/db163491-1bad-4b12-b00c-9f6b83a1b52f-kube-api-access-vjptd\") pod \"marketplace-operator-79b997595-882r8\" (UID: \"db163491-1bad-4b12-b00c-9f6b83a1b52f\") " pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.097155 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db163491-1bad-4b12-b00c-9f6b83a1b52f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-882r8\" (UID: \"db163491-1bad-4b12-b00c-9f6b83a1b52f\") " pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.098582 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db163491-1bad-4b12-b00c-9f6b83a1b52f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-882r8\" (UID: 
\"db163491-1bad-4b12-b00c-9f6b83a1b52f\") " pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.110173 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjptd\" (UniqueName: \"kubernetes.io/projected/db163491-1bad-4b12-b00c-9f6b83a1b52f-kube-api-access-vjptd\") pod \"marketplace-operator-79b997595-882r8\" (UID: \"db163491-1bad-4b12-b00c-9f6b83a1b52f\") " pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.161285 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.278301 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.392367 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-catalog-content\") pod \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.392575 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcc29\" (UniqueName: \"kubernetes.io/projected/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-kube-api-access-mcc29\") pod \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.392615 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-utilities\") pod \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\" (UID: \"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2\") " 
Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.393510 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-utilities" (OuterVolumeSpecName: "utilities") pod "f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" (UID: "f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.402077 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-kube-api-access-mcc29" (OuterVolumeSpecName: "kube-api-access-mcc29") pod "f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" (UID: "f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2"). InnerVolumeSpecName "kube-api-access-mcc29". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.495150 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcc29\" (UniqueName: \"kubernetes.io/projected/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-kube-api-access-mcc29\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.495187 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.518240 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" (UID: "f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.588395 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-882r8"] Dec 12 00:11:59 crc kubenswrapper[4917]: W1212 00:11:59.600806 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb163491_1bad_4b12_b00c_9f6b83a1b52f.slice/crio-9cd27373d12e3251b87b6636fdc1205c4cc516178287f1e3a996eb49a74e7df2 WatchSource:0}: Error finding container 9cd27373d12e3251b87b6636fdc1205c4cc516178287f1e3a996eb49a74e7df2: Status 404 returned error can't find the container with id 9cd27373d12e3251b87b6636fdc1205c4cc516178287f1e3a996eb49a74e7df2 Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.603120 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.648288 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-882r8" event={"ID":"db163491-1bad-4b12-b00c-9f6b83a1b52f","Type":"ContainerStarted","Data":"9cd27373d12e3251b87b6636fdc1205c4cc516178287f1e3a996eb49a74e7df2"} Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.650371 4917 generic.go:334] "Generic (PLEG): container finished" podID="24e2c7bd-c682-49e7-942c-eb8afe865602" containerID="1d9a945b1a23830a725b1a6607875516e1dd5e297fb2cf0ffe345ee4645c5a36" exitCode=0 Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.650432 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" event={"ID":"24e2c7bd-c682-49e7-942c-eb8afe865602","Type":"ContainerDied","Data":"1d9a945b1a23830a725b1a6607875516e1dd5e297fb2cf0ffe345ee4645c5a36"} Dec 12 00:11:59 crc 
kubenswrapper[4917]: I1212 00:11:59.653459 4917 generic.go:334] "Generic (PLEG): container finished" podID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerID="93c6666c53cf15d90e8cd8575c773429791da5ee2623d357979e39bae5877779" exitCode=0 Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.653691 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7vjh" event={"ID":"040517b1-b5e4-46e0-90c9-4fb4a7e5726f","Type":"ContainerDied","Data":"93c6666c53cf15d90e8cd8575c773429791da5ee2623d357979e39bae5877779"} Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.656372 4917 generic.go:334] "Generic (PLEG): container finished" podID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerID="0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1" exitCode=0 Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.656414 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnjdr" event={"ID":"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2","Type":"ContainerDied","Data":"0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1"} Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.656450 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnjdr" event={"ID":"f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2","Type":"ContainerDied","Data":"5f30387d2805cca96b34fc30b974d148a04794fc94eb14dee834560ebbbffc27"} Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.656469 4917 scope.go:117] "RemoveContainer" containerID="0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.656551 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bnjdr" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.662486 4917 generic.go:334] "Generic (PLEG): container finished" podID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerID="44caefb1808c222da7b98b41b9d70ede0469cdeb28bb8185d2ff00f170c3ff7f" exitCode=0 Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.662578 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5t8z" event={"ID":"f59fe677-4717-4fd4-8491-6f9d68ab5a54","Type":"ContainerDied","Data":"44caefb1808c222da7b98b41b9d70ede0469cdeb28bb8185d2ff00f170c3ff7f"} Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.668890 4917 generic.go:334] "Generic (PLEG): container finished" podID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerID="c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c" exitCode=0 Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.670772 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7ml9" event={"ID":"c1fe8325-d2d0-4418-8c57-cdb509c32ce6","Type":"ContainerDied","Data":"c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c"} Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.678551 4917 scope.go:117] "RemoveContainer" containerID="6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.680950 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bnjdr"] Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.685829 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bnjdr"] Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.705178 4917 scope.go:117] "RemoveContainer" containerID="3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.719175 4917 
scope.go:117] "RemoveContainer" containerID="0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1" Dec 12 00:11:59 crc kubenswrapper[4917]: E1212 00:11:59.719501 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1\": container with ID starting with 0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1 not found: ID does not exist" containerID="0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.719531 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1"} err="failed to get container status \"0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1\": rpc error: code = NotFound desc = could not find container \"0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1\": container with ID starting with 0ddc5a063b11dc8a36ce19e56c7a66e890bf4e95427de0ba793d56d450769ad1 not found: ID does not exist" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.719554 4917 scope.go:117] "RemoveContainer" containerID="6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366" Dec 12 00:11:59 crc kubenswrapper[4917]: E1212 00:11:59.719977 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366\": container with ID starting with 6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366 not found: ID does not exist" containerID="6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.720002 4917 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366"} err="failed to get container status \"6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366\": rpc error: code = NotFound desc = could not find container \"6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366\": container with ID starting with 6878b25b03ca742416671e57ae547ed4c811207097275ae73a67a4d2903b7366 not found: ID does not exist" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.720018 4917 scope.go:117] "RemoveContainer" containerID="3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16" Dec 12 00:11:59 crc kubenswrapper[4917]: E1212 00:11:59.720283 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16\": container with ID starting with 3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16 not found: ID does not exist" containerID="3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.720307 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16"} err="failed to get container status \"3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16\": rpc error: code = NotFound desc = could not find container \"3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16\": container with ID starting with 3bb836eec10431da814c80124ef8d882b8c0d8f472902af25b7610b912f2ae16 not found: ID does not exist" Dec 12 00:11:59 crc kubenswrapper[4917]: E1212 00:11:59.737243 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c is running 
failed: container process not found" containerID="c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:11:59 crc kubenswrapper[4917]: E1212 00:11:59.738349 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c is running failed: container process not found" containerID="c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:11:59 crc kubenswrapper[4917]: E1212 00:11:59.738832 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c is running failed: container process not found" containerID="c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:11:59 crc kubenswrapper[4917]: E1212 00:11:59.738862 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-k7ml9" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerName="registry-server" Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.869481 4917 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8st8c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Dec 12 00:11:59 crc kubenswrapper[4917]: I1212 00:11:59.870550 4917 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" podUID="24e2c7bd-c682-49e7-942c-eb8afe865602" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.128491 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.211453 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-catalog-content\") pod \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.211507 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-utilities\") pod \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.211552 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7qzp\" (UniqueName: \"kubernetes.io/projected/f59fe677-4717-4fd4-8491-6f9d68ab5a54-kube-api-access-j7qzp\") pod \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\" (UID: \"f59fe677-4717-4fd4-8491-6f9d68ab5a54\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.213059 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-utilities" (OuterVolumeSpecName: "utilities") pod "f59fe677-4717-4fd4-8491-6f9d68ab5a54" (UID: "f59fe677-4717-4fd4-8491-6f9d68ab5a54"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.219328 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f59fe677-4717-4fd4-8491-6f9d68ab5a54-kube-api-access-j7qzp" (OuterVolumeSpecName: "kube-api-access-j7qzp") pod "f59fe677-4717-4fd4-8491-6f9d68ab5a54" (UID: "f59fe677-4717-4fd4-8491-6f9d68ab5a54"). InnerVolumeSpecName "kube-api-access-j7qzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.263094 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.268500 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.272237 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7ml9" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.277600 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f59fe677-4717-4fd4-8491-6f9d68ab5a54" (UID: "f59fe677-4717-4fd4-8491-6f9d68ab5a54"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.314684 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7qzp\" (UniqueName: \"kubernetes.io/projected/f59fe677-4717-4fd4-8491-6f9d68ab5a54-kube-api-access-j7qzp\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.314718 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.314728 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f59fe677-4717-4fd4-8491-6f9d68ab5a54-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.415794 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqnn8\" (UniqueName: \"kubernetes.io/projected/24e2c7bd-c682-49e7-942c-eb8afe865602-kube-api-access-sqnn8\") pod \"24e2c7bd-c682-49e7-942c-eb8afe865602\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.415961 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-catalog-content\") pod \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.416005 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7z7f\" (UniqueName: \"kubernetes.io/projected/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-kube-api-access-t7z7f\") pod \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " Dec 12 00:12:00 crc kubenswrapper[4917]: 
I1212 00:12:00.416041 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-catalog-content\") pod \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.416063 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-trusted-ca\") pod \"24e2c7bd-c682-49e7-942c-eb8afe865602\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.416098 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-operator-metrics\") pod \"24e2c7bd-c682-49e7-942c-eb8afe865602\" (UID: \"24e2c7bd-c682-49e7-942c-eb8afe865602\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.416138 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-utilities\") pod \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\" (UID: \"c1fe8325-d2d0-4418-8c57-cdb509c32ce6\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.416188 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk68v\" (UniqueName: \"kubernetes.io/projected/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-kube-api-access-rk68v\") pod \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.416255 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-utilities\") pod \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\" (UID: \"040517b1-b5e4-46e0-90c9-4fb4a7e5726f\") " Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.417357 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "24e2c7bd-c682-49e7-942c-eb8afe865602" (UID: "24e2c7bd-c682-49e7-942c-eb8afe865602"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.417538 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-utilities" (OuterVolumeSpecName: "utilities") pod "040517b1-b5e4-46e0-90c9-4fb4a7e5726f" (UID: "040517b1-b5e4-46e0-90c9-4fb4a7e5726f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.417954 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-utilities" (OuterVolumeSpecName: "utilities") pod "c1fe8325-d2d0-4418-8c57-cdb509c32ce6" (UID: "c1fe8325-d2d0-4418-8c57-cdb509c32ce6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.420851 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-kube-api-access-t7z7f" (OuterVolumeSpecName: "kube-api-access-t7z7f") pod "c1fe8325-d2d0-4418-8c57-cdb509c32ce6" (UID: "c1fe8325-d2d0-4418-8c57-cdb509c32ce6"). InnerVolumeSpecName "kube-api-access-t7z7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.420900 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e2c7bd-c682-49e7-942c-eb8afe865602-kube-api-access-sqnn8" (OuterVolumeSpecName: "kube-api-access-sqnn8") pod "24e2c7bd-c682-49e7-942c-eb8afe865602" (UID: "24e2c7bd-c682-49e7-942c-eb8afe865602"). InnerVolumeSpecName "kube-api-access-sqnn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.421744 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "24e2c7bd-c682-49e7-942c-eb8afe865602" (UID: "24e2c7bd-c682-49e7-942c-eb8afe865602"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.423348 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-kube-api-access-rk68v" (OuterVolumeSpecName: "kube-api-access-rk68v") pod "040517b1-b5e4-46e0-90c9-4fb4a7e5726f" (UID: "040517b1-b5e4-46e0-90c9-4fb4a7e5726f"). InnerVolumeSpecName "kube-api-access-rk68v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.446201 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1fe8325-d2d0-4418-8c57-cdb509c32ce6" (UID: "c1fe8325-d2d0-4418-8c57-cdb509c32ce6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.482472 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "040517b1-b5e4-46e0-90c9-4fb4a7e5726f" (UID: "040517b1-b5e4-46e0-90c9-4fb4a7e5726f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.518035 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.518085 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7z7f\" (UniqueName: \"kubernetes.io/projected/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-kube-api-access-t7z7f\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.518101 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.518114 4917 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.518126 4917 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24e2c7bd-c682-49e7-942c-eb8afe865602-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.518137 4917 reconciler_common.go:293] 
"Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1fe8325-d2d0-4418-8c57-cdb509c32ce6-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.518149 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk68v\" (UniqueName: \"kubernetes.io/projected/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-kube-api-access-rk68v\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.518159 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/040517b1-b5e4-46e0-90c9-4fb4a7e5726f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.518172 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqnn8\" (UniqueName: \"kubernetes.io/projected/24e2c7bd-c682-49e7-942c-eb8afe865602-kube-api-access-sqnn8\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.676912 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7ml9" event={"ID":"c1fe8325-d2d0-4418-8c57-cdb509c32ce6","Type":"ContainerDied","Data":"2c082597ef4c3f8654fc8518a66c197b57345e1f9dd0f6548d019b04c0e065fb"} Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.676976 4917 scope.go:117] "RemoveContainer" containerID="c3f7c3aff42ae9ab35a0b73c2fc7182d0def310a8c741e42296cbfb81de2fd9c" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.676926 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7ml9" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.680409 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-882r8" event={"ID":"db163491-1bad-4b12-b00c-9f6b83a1b52f","Type":"ContainerStarted","Data":"48b7ad1c4c85dc0fad5ae42c84c6a519d66ddaf949c92987caa8cd521bf46293"} Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.681709 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.687115 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-882r8" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.691253 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" event={"ID":"24e2c7bd-c682-49e7-942c-eb8afe865602","Type":"ContainerDied","Data":"1f41a174d66efb3613adb30f67fce5b7df8bb03a8cf091862d23d285c59636ad"} Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.691290 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8st8c" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.704363 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7vjh" event={"ID":"040517b1-b5e4-46e0-90c9-4fb4a7e5726f","Type":"ContainerDied","Data":"dbb1e5692a9d30d975bd93b10e043b0ad54b7962426114514ad18898bfe0a7a0"} Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.704989 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r7vjh" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.707342 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-882r8" podStartSLOduration=2.707305084 podStartE2EDuration="2.707305084s" podCreationTimestamp="2025-12-12 00:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:12:00.702060313 +0000 UTC m=+355.479861136" watchObservedRunningTime="2025-12-12 00:12:00.707305084 +0000 UTC m=+355.485105897" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.709106 4917 scope.go:117] "RemoveContainer" containerID="3e32486a4d20773519aa65e5d69f930bbd121afd29e929eb2dbdd95b3e1b2d74" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.719431 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5t8z" event={"ID":"f59fe677-4717-4fd4-8491-6f9d68ab5a54","Type":"ContainerDied","Data":"82624f4c0391c209bf0cae0709dadf658400fceeb60031aa4e7315bef37b0adb"} Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.726267 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5t8z" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.743522 4917 scope.go:117] "RemoveContainer" containerID="923287da3b62bb51eeb2811b37e0287f74fcdbff8e12d0c13405168cae19700d" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.772051 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8st8c"] Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.777839 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8st8c"] Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.781244 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r7vjh"] Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.782009 4917 scope.go:117] "RemoveContainer" containerID="1d9a945b1a23830a725b1a6607875516e1dd5e297fb2cf0ffe345ee4645c5a36" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.788426 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r7vjh"] Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.797355 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7ml9"] Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.814770 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7ml9"] Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.818135 4917 scope.go:117] "RemoveContainer" containerID="93c6666c53cf15d90e8cd8575c773429791da5ee2623d357979e39bae5877779" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.821054 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5t8z"] Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.826323 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-f5t8z"] Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.836709 4917 scope.go:117] "RemoveContainer" containerID="ec9d77a8e9fc96f752104e1e456711645f2e81b53e29f9cc911f836ed8fab25e" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.850728 4917 scope.go:117] "RemoveContainer" containerID="ac9b679c8dd00a9e297fa3d4300f6ec851ab7f7f648b0f41f1467b4a3c238661" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.867437 4917 scope.go:117] "RemoveContainer" containerID="44caefb1808c222da7b98b41b9d70ede0469cdeb28bb8185d2ff00f170c3ff7f" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.879776 4917 scope.go:117] "RemoveContainer" containerID="32c054ca5f893d2f90e919bfcd81d8708eefd23eb3759db12e0e67407e512a3d" Dec 12 00:12:00 crc kubenswrapper[4917]: I1212 00:12:00.916673 4917 scope.go:117] "RemoveContainer" containerID="8c224837e69673c38960cc4e064afa1c4d943f19712dced0e4850734ebf6ef2d" Dec 12 00:12:01 crc kubenswrapper[4917]: I1212 00:12:01.614799 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" path="/var/lib/kubelet/pods/040517b1-b5e4-46e0-90c9-4fb4a7e5726f/volumes" Dec 12 00:12:01 crc kubenswrapper[4917]: I1212 00:12:01.617570 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e2c7bd-c682-49e7-942c-eb8afe865602" path="/var/lib/kubelet/pods/24e2c7bd-c682-49e7-942c-eb8afe865602/volumes" Dec 12 00:12:01 crc kubenswrapper[4917]: I1212 00:12:01.618292 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" path="/var/lib/kubelet/pods/c1fe8325-d2d0-4418-8c57-cdb509c32ce6/volumes" Dec 12 00:12:01 crc kubenswrapper[4917]: I1212 00:12:01.619788 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" path="/var/lib/kubelet/pods/f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2/volumes" Dec 12 00:12:01 crc kubenswrapper[4917]: I1212 
00:12:01.620513 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" path="/var/lib/kubelet/pods/f59fe677-4717-4fd4-8491-6f9d68ab5a54/volumes" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.993888 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p79gn"] Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994626 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerName="registry-server" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994667 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerName="registry-server" Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994677 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerName="registry-server" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994683 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerName="registry-server" Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994696 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e2c7bd-c682-49e7-942c-eb8afe865602" containerName="marketplace-operator" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994702 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e2c7bd-c682-49e7-942c-eb8afe865602" containerName="marketplace-operator" Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994709 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerName="registry-server" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994719 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerName="registry-server" Dec 12 00:12:02 crc 
kubenswrapper[4917]: E1212 00:12:02.994729 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerName="extract-content" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994736 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerName="extract-content" Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994746 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerName="extract-content" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994752 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerName="extract-content" Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994764 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerName="registry-server" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994771 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerName="registry-server" Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994779 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerName="extract-utilities" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994788 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerName="extract-utilities" Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994798 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerName="extract-utilities" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994804 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerName="extract-utilities" Dec 12 00:12:02 crc 
kubenswrapper[4917]: E1212 00:12:02.994811 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerName="extract-content" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994816 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerName="extract-content" Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994822 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerName="extract-content" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994829 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerName="extract-content" Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994839 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerName="extract-utilities" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994845 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerName="extract-utilities" Dec 12 00:12:02 crc kubenswrapper[4917]: E1212 00:12:02.994854 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerName="extract-utilities" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994860 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerName="extract-utilities" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.994991 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e2c7bd-c682-49e7-942c-eb8afe865602" containerName="marketplace-operator" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.995004 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1fe8325-d2d0-4418-8c57-cdb509c32ce6" containerName="registry-server" Dec 12 
00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.995013 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="040517b1-b5e4-46e0-90c9-4fb4a7e5726f" containerName="registry-server" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.995024 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b6c52c-c1ca-4d5b-8d24-2a464f7501c2" containerName="registry-server" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.995032 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f59fe677-4717-4fd4-8491-6f9d68ab5a54" containerName="registry-server" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.996032 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:02 crc kubenswrapper[4917]: I1212 00:12:02.998993 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.027551 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p79gn"] Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.057079 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr4ck\" (UniqueName: \"kubernetes.io/projected/67ac8eda-571d-4185-b4ee-e6eff7fab229-kube-api-access-fr4ck\") pod \"community-operators-p79gn\" (UID: \"67ac8eda-571d-4185-b4ee-e6eff7fab229\") " pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.057157 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ac8eda-571d-4185-b4ee-e6eff7fab229-utilities\") pod \"community-operators-p79gn\" (UID: \"67ac8eda-571d-4185-b4ee-e6eff7fab229\") " pod="openshift-marketplace/community-operators-p79gn" Dec 12 
00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.057501 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ac8eda-571d-4185-b4ee-e6eff7fab229-catalog-content\") pod \"community-operators-p79gn\" (UID: \"67ac8eda-571d-4185-b4ee-e6eff7fab229\") " pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.158951 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ac8eda-571d-4185-b4ee-e6eff7fab229-catalog-content\") pod \"community-operators-p79gn\" (UID: \"67ac8eda-571d-4185-b4ee-e6eff7fab229\") " pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.159098 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr4ck\" (UniqueName: \"kubernetes.io/projected/67ac8eda-571d-4185-b4ee-e6eff7fab229-kube-api-access-fr4ck\") pod \"community-operators-p79gn\" (UID: \"67ac8eda-571d-4185-b4ee-e6eff7fab229\") " pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.159161 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ac8eda-571d-4185-b4ee-e6eff7fab229-utilities\") pod \"community-operators-p79gn\" (UID: \"67ac8eda-571d-4185-b4ee-e6eff7fab229\") " pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.159683 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ac8eda-571d-4185-b4ee-e6eff7fab229-catalog-content\") pod \"community-operators-p79gn\" (UID: \"67ac8eda-571d-4185-b4ee-e6eff7fab229\") " pod="openshift-marketplace/community-operators-p79gn" Dec 12 
00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.159836 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ac8eda-571d-4185-b4ee-e6eff7fab229-utilities\") pod \"community-operators-p79gn\" (UID: \"67ac8eda-571d-4185-b4ee-e6eff7fab229\") " pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.196990 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr4ck\" (UniqueName: \"kubernetes.io/projected/67ac8eda-571d-4185-b4ee-e6eff7fab229-kube-api-access-fr4ck\") pod \"community-operators-p79gn\" (UID: \"67ac8eda-571d-4185-b4ee-e6eff7fab229\") " pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.206951 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ccdts"] Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.208124 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.211200 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.213108 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccdts"] Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.341855 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.361232 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phr2l\" (UniqueName: \"kubernetes.io/projected/4ba83efd-fe91-4187-ba9a-ee464371ba30-kube-api-access-phr2l\") pod \"redhat-operators-ccdts\" (UID: \"4ba83efd-fe91-4187-ba9a-ee464371ba30\") " pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.361335 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba83efd-fe91-4187-ba9a-ee464371ba30-catalog-content\") pod \"redhat-operators-ccdts\" (UID: \"4ba83efd-fe91-4187-ba9a-ee464371ba30\") " pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.361372 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba83efd-fe91-4187-ba9a-ee464371ba30-utilities\") pod \"redhat-operators-ccdts\" (UID: \"4ba83efd-fe91-4187-ba9a-ee464371ba30\") " pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.462814 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phr2l\" (UniqueName: \"kubernetes.io/projected/4ba83efd-fe91-4187-ba9a-ee464371ba30-kube-api-access-phr2l\") pod \"redhat-operators-ccdts\" (UID: \"4ba83efd-fe91-4187-ba9a-ee464371ba30\") " pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.463295 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba83efd-fe91-4187-ba9a-ee464371ba30-catalog-content\") pod 
\"redhat-operators-ccdts\" (UID: \"4ba83efd-fe91-4187-ba9a-ee464371ba30\") " pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.463356 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba83efd-fe91-4187-ba9a-ee464371ba30-utilities\") pod \"redhat-operators-ccdts\" (UID: \"4ba83efd-fe91-4187-ba9a-ee464371ba30\") " pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.464096 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba83efd-fe91-4187-ba9a-ee464371ba30-catalog-content\") pod \"redhat-operators-ccdts\" (UID: \"4ba83efd-fe91-4187-ba9a-ee464371ba30\") " pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.464145 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba83efd-fe91-4187-ba9a-ee464371ba30-utilities\") pod \"redhat-operators-ccdts\" (UID: \"4ba83efd-fe91-4187-ba9a-ee464371ba30\") " pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.487477 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phr2l\" (UniqueName: \"kubernetes.io/projected/4ba83efd-fe91-4187-ba9a-ee464371ba30-kube-api-access-phr2l\") pod \"redhat-operators-ccdts\" (UID: \"4ba83efd-fe91-4187-ba9a-ee464371ba30\") " pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.530932 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.878177 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p79gn"] Dec 12 00:12:03 crc kubenswrapper[4917]: I1212 00:12:03.992360 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccdts"] Dec 12 00:12:04 crc kubenswrapper[4917]: W1212 00:12:04.002240 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ba83efd_fe91_4187_ba9a_ee464371ba30.slice/crio-cd13aab5707e7c018dc3b124a9e974ab54bfdc35e953a48c83e4683bb32753a1 WatchSource:0}: Error finding container cd13aab5707e7c018dc3b124a9e974ab54bfdc35e953a48c83e4683bb32753a1: Status 404 returned error can't find the container with id cd13aab5707e7c018dc3b124a9e974ab54bfdc35e953a48c83e4683bb32753a1 Dec 12 00:12:04 crc kubenswrapper[4917]: I1212 00:12:04.752535 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79gn" event={"ID":"67ac8eda-571d-4185-b4ee-e6eff7fab229","Type":"ContainerStarted","Data":"8256be0b54fb215fd5e0e9906d4ca47bd19eed5ae5a98a356f8a5e6a0ac820dc"} Dec 12 00:12:04 crc kubenswrapper[4917]: I1212 00:12:04.753813 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccdts" event={"ID":"4ba83efd-fe91-4187-ba9a-ee464371ba30","Type":"ContainerStarted","Data":"cd13aab5707e7c018dc3b124a9e974ab54bfdc35e953a48c83e4683bb32753a1"} Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.396723 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tqshb"] Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.398295 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.403388 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.409520 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqshb"] Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.500297 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs2bc\" (UniqueName: \"kubernetes.io/projected/5ace861f-ace6-42d1-a717-37da333aed72-kube-api-access-xs2bc\") pod \"redhat-marketplace-tqshb\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.500352 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-catalog-content\") pod \"redhat-marketplace-tqshb\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.500385 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-utilities\") pod \"redhat-marketplace-tqshb\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.591831 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hcbwr"] Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.593160 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.599497 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.601339 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-catalog-content\") pod \"redhat-marketplace-tqshb\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.601393 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-utilities\") pod \"redhat-marketplace-tqshb\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.601452 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs2bc\" (UniqueName: \"kubernetes.io/projected/5ace861f-ace6-42d1-a717-37da333aed72-kube-api-access-xs2bc\") pod \"redhat-marketplace-tqshb\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.601906 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-catalog-content\") pod \"redhat-marketplace-tqshb\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.601962 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-utilities\") pod \"redhat-marketplace-tqshb\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.614872 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcbwr"] Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.635632 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs2bc\" (UniqueName: \"kubernetes.io/projected/5ace861f-ace6-42d1-a717-37da333aed72-kube-api-access-xs2bc\") pod \"redhat-marketplace-tqshb\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.702703 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7h6\" (UniqueName: \"kubernetes.io/projected/f4e49af0-4a50-42ce-af81-a397919c9df2-kube-api-access-8x7h6\") pod \"certified-operators-hcbwr\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.702794 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-catalog-content\") pod \"certified-operators-hcbwr\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.702864 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-utilities\") pod \"certified-operators-hcbwr\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " 
pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.728259 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.736518 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.804143 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x7h6\" (UniqueName: \"kubernetes.io/projected/f4e49af0-4a50-42ce-af81-a397919c9df2-kube-api-access-8x7h6\") pod \"certified-operators-hcbwr\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.804735 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-catalog-content\") pod \"certified-operators-hcbwr\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.804875 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-utilities\") pod \"certified-operators-hcbwr\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.805402 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-catalog-content\") pod \"certified-operators-hcbwr\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " 
pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.805690 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-utilities\") pod \"certified-operators-hcbwr\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.835550 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x7h6\" (UniqueName: \"kubernetes.io/projected/f4e49af0-4a50-42ce-af81-a397919c9df2-kube-api-access-8x7h6\") pod \"certified-operators-hcbwr\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:05 crc kubenswrapper[4917]: I1212 00:12:05.910942 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.032619 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqshb"] Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.391150 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcbwr"] Dec 12 00:12:06 crc kubenswrapper[4917]: W1212 00:12:06.450004 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4e49af0_4a50_42ce_af81_a397919c9df2.slice/crio-2ec4a11ad97a4be30c59fe57ffe3f6c1a84095225c60ab75a209b6825ab88ad0 WatchSource:0}: Error finding container 2ec4a11ad97a4be30c59fe57ffe3f6c1a84095225c60ab75a209b6825ab88ad0: Status 404 returned error can't find the container with id 2ec4a11ad97a4be30c59fe57ffe3f6c1a84095225c60ab75a209b6825ab88ad0 Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.780551 4917 generic.go:334] 
"Generic (PLEG): container finished" podID="67ac8eda-571d-4185-b4ee-e6eff7fab229" containerID="79c385c7db6c5b1e6b0795abf3f028c176e10ee93181d39ba15d88efcbc291f6" exitCode=0 Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.780675 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79gn" event={"ID":"67ac8eda-571d-4185-b4ee-e6eff7fab229","Type":"ContainerDied","Data":"79c385c7db6c5b1e6b0795abf3f028c176e10ee93181d39ba15d88efcbc291f6"} Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.782358 4917 generic.go:334] "Generic (PLEG): container finished" podID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerID="b18553363f574cc9e9969ffff8b34647b0cd485403c39710d6c3257864b4368d" exitCode=0 Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.782534 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbwr" event={"ID":"f4e49af0-4a50-42ce-af81-a397919c9df2","Type":"ContainerDied","Data":"b18553363f574cc9e9969ffff8b34647b0cd485403c39710d6c3257864b4368d"} Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.782587 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbwr" event={"ID":"f4e49af0-4a50-42ce-af81-a397919c9df2","Type":"ContainerStarted","Data":"2ec4a11ad97a4be30c59fe57ffe3f6c1a84095225c60ab75a209b6825ab88ad0"} Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.787166 4917 generic.go:334] "Generic (PLEG): container finished" podID="4ba83efd-fe91-4187-ba9a-ee464371ba30" containerID="950182b227ea7450e0b6a2f3c4ed35f6bf15c93ddf2462bee52f9eb9a13eb4e9" exitCode=0 Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.787437 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccdts" event={"ID":"4ba83efd-fe91-4187-ba9a-ee464371ba30","Type":"ContainerDied","Data":"950182b227ea7450e0b6a2f3c4ed35f6bf15c93ddf2462bee52f9eb9a13eb4e9"} Dec 12 00:12:06 crc 
kubenswrapper[4917]: I1212 00:12:06.791107 4917 generic.go:334] "Generic (PLEG): container finished" podID="5ace861f-ace6-42d1-a717-37da333aed72" containerID="f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc" exitCode=0 Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.791158 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqshb" event={"ID":"5ace861f-ace6-42d1-a717-37da333aed72","Type":"ContainerDied","Data":"f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc"} Dec 12 00:12:06 crc kubenswrapper[4917]: I1212 00:12:06.791742 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqshb" event={"ID":"5ace861f-ace6-42d1-a717-37da333aed72","Type":"ContainerStarted","Data":"280c8b5e237c01a64c09ebbda7e32789390aa45d394069928f2da47caf0e58d3"} Dec 12 00:12:16 crc kubenswrapper[4917]: I1212 00:12:16.260843 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-x86v4" Dec 12 00:12:16 crc kubenswrapper[4917]: I1212 00:12:16.314032 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-th48g"] Dec 12 00:12:18 crc kubenswrapper[4917]: I1212 00:12:18.867083 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccdts" event={"ID":"4ba83efd-fe91-4187-ba9a-ee464371ba30","Type":"ContainerStarted","Data":"bad133daf0bc9d529d960f9a8055c01e7a30cdd6e147c207cce6ea362dc60495"} Dec 12 00:12:18 crc kubenswrapper[4917]: I1212 00:12:18.869342 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79gn" event={"ID":"67ac8eda-571d-4185-b4ee-e6eff7fab229","Type":"ContainerStarted","Data":"d7ee1b7ce09d54a82c9130104a91e23d2cdbd8e1f655699f38f0dde192a6f757"} Dec 12 00:12:18 crc kubenswrapper[4917]: I1212 00:12:18.872717 4917 generic.go:334] "Generic 
(PLEG): container finished" podID="5ace861f-ace6-42d1-a717-37da333aed72" containerID="fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be" exitCode=0 Dec 12 00:12:18 crc kubenswrapper[4917]: I1212 00:12:18.872769 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqshb" event={"ID":"5ace861f-ace6-42d1-a717-37da333aed72","Type":"ContainerDied","Data":"fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be"} Dec 12 00:12:18 crc kubenswrapper[4917]: I1212 00:12:18.875120 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbwr" event={"ID":"f4e49af0-4a50-42ce-af81-a397919c9df2","Type":"ContainerStarted","Data":"3cd36e080ec093f66f1a44d8d6b13cd57d5449e85daeb7cdda2ccb4c67092b1a"} Dec 12 00:12:21 crc kubenswrapper[4917]: I1212 00:12:21.901630 4917 generic.go:334] "Generic (PLEG): container finished" podID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerID="3cd36e080ec093f66f1a44d8d6b13cd57d5449e85daeb7cdda2ccb4c67092b1a" exitCode=0 Dec 12 00:12:21 crc kubenswrapper[4917]: I1212 00:12:21.901723 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbwr" event={"ID":"f4e49af0-4a50-42ce-af81-a397919c9df2","Type":"ContainerDied","Data":"3cd36e080ec093f66f1a44d8d6b13cd57d5449e85daeb7cdda2ccb4c67092b1a"} Dec 12 00:12:21 crc kubenswrapper[4917]: I1212 00:12:21.911859 4917 generic.go:334] "Generic (PLEG): container finished" podID="67ac8eda-571d-4185-b4ee-e6eff7fab229" containerID="d7ee1b7ce09d54a82c9130104a91e23d2cdbd8e1f655699f38f0dde192a6f757" exitCode=0 Dec 12 00:12:21 crc kubenswrapper[4917]: I1212 00:12:21.911926 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79gn" event={"ID":"67ac8eda-571d-4185-b4ee-e6eff7fab229","Type":"ContainerDied","Data":"d7ee1b7ce09d54a82c9130104a91e23d2cdbd8e1f655699f38f0dde192a6f757"} Dec 12 00:12:22 crc 
kubenswrapper[4917]: I1212 00:12:22.964999 4917 generic.go:334] "Generic (PLEG): container finished" podID="4ba83efd-fe91-4187-ba9a-ee464371ba30" containerID="bad133daf0bc9d529d960f9a8055c01e7a30cdd6e147c207cce6ea362dc60495" exitCode=0 Dec 12 00:12:22 crc kubenswrapper[4917]: I1212 00:12:22.965039 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccdts" event={"ID":"4ba83efd-fe91-4187-ba9a-ee464371ba30","Type":"ContainerDied","Data":"bad133daf0bc9d529d960f9a8055c01e7a30cdd6e147c207cce6ea362dc60495"} Dec 12 00:12:29 crc kubenswrapper[4917]: I1212 00:12:29.640190 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:12:29 crc kubenswrapper[4917]: I1212 00:12:29.640270 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:12:41 crc kubenswrapper[4917]: I1212 00:12:41.361549 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-th48g" podUID="ac4662d0-8501-4627-81b8-fdfffff90309" containerName="registry" containerID="cri-o://690ce27f806975eded134aa8cb8dc25d264ef84fde40707d094da7187fbbd0a3" gracePeriod=30 Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.105390 4917 generic.go:334] "Generic (PLEG): container finished" podID="ac4662d0-8501-4627-81b8-fdfffff90309" containerID="690ce27f806975eded134aa8cb8dc25d264ef84fde40707d094da7187fbbd0a3" exitCode=0 Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.106178 4917 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-th48g" event={"ID":"ac4662d0-8501-4627-81b8-fdfffff90309","Type":"ContainerDied","Data":"690ce27f806975eded134aa8cb8dc25d264ef84fde40707d094da7187fbbd0a3"} Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.631763 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.803791 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac4662d0-8501-4627-81b8-fdfffff90309-ca-trust-extracted\") pod \"ac4662d0-8501-4627-81b8-fdfffff90309\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.803870 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fkft\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-kube-api-access-8fkft\") pod \"ac4662d0-8501-4627-81b8-fdfffff90309\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.804079 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ac4662d0-8501-4627-81b8-fdfffff90309\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.804202 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-registry-tls\") pod \"ac4662d0-8501-4627-81b8-fdfffff90309\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.804233 4917 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-registry-certificates\") pod \"ac4662d0-8501-4627-81b8-fdfffff90309\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.804266 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-trusted-ca\") pod \"ac4662d0-8501-4627-81b8-fdfffff90309\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.804291 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac4662d0-8501-4627-81b8-fdfffff90309-installation-pull-secrets\") pod \"ac4662d0-8501-4627-81b8-fdfffff90309\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.804332 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-bound-sa-token\") pod \"ac4662d0-8501-4627-81b8-fdfffff90309\" (UID: \"ac4662d0-8501-4627-81b8-fdfffff90309\") " Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.805986 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ac4662d0-8501-4627-81b8-fdfffff90309" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.806742 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ac4662d0-8501-4627-81b8-fdfffff90309" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.811411 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ac4662d0-8501-4627-81b8-fdfffff90309" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.811817 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac4662d0-8501-4627-81b8-fdfffff90309-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ac4662d0-8501-4627-81b8-fdfffff90309" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.812220 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ac4662d0-8501-4627-81b8-fdfffff90309" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.813177 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-kube-api-access-8fkft" (OuterVolumeSpecName: "kube-api-access-8fkft") pod "ac4662d0-8501-4627-81b8-fdfffff90309" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309"). InnerVolumeSpecName "kube-api-access-8fkft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.816526 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ac4662d0-8501-4627-81b8-fdfffff90309" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.828861 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4662d0-8501-4627-81b8-fdfffff90309-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ac4662d0-8501-4627-81b8-fdfffff90309" (UID: "ac4662d0-8501-4627-81b8-fdfffff90309"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.905767 4917 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.906172 4917 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.906302 4917 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac4662d0-8501-4627-81b8-fdfffff90309-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.906372 4917 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac4662d0-8501-4627-81b8-fdfffff90309-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.906438 4917 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.906496 4917 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac4662d0-8501-4627-81b8-fdfffff90309-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:46 crc kubenswrapper[4917]: I1212 00:12:46.906679 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fkft\" (UniqueName: \"kubernetes.io/projected/ac4662d0-8501-4627-81b8-fdfffff90309-kube-api-access-8fkft\") on node \"crc\" DevicePath \"\"" Dec 12 00:12:47 crc 
kubenswrapper[4917]: I1212 00:12:47.115548 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-th48g" event={"ID":"ac4662d0-8501-4627-81b8-fdfffff90309","Type":"ContainerDied","Data":"59e19c310f1c0b78bf7fc31a08b3441ca5b533e3eca6b256017962fcbeb9d7a1"} Dec 12 00:12:47 crc kubenswrapper[4917]: I1212 00:12:47.115637 4917 scope.go:117] "RemoveContainer" containerID="690ce27f806975eded134aa8cb8dc25d264ef84fde40707d094da7187fbbd0a3" Dec 12 00:12:47 crc kubenswrapper[4917]: I1212 00:12:47.115686 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-th48g" Dec 12 00:12:47 crc kubenswrapper[4917]: I1212 00:12:47.157790 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-th48g"] Dec 12 00:12:47 crc kubenswrapper[4917]: I1212 00:12:47.163323 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-th48g"] Dec 12 00:12:47 crc kubenswrapper[4917]: I1212 00:12:47.609670 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4662d0-8501-4627-81b8-fdfffff90309" path="/var/lib/kubelet/pods/ac4662d0-8501-4627-81b8-fdfffff90309/volumes" Dec 12 00:12:48 crc kubenswrapper[4917]: I1212 00:12:48.125310 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbwr" event={"ID":"f4e49af0-4a50-42ce-af81-a397919c9df2","Type":"ContainerStarted","Data":"5f6eef5d872d53d52c6380cc6a1fe394f09b2a3e09a5f4413cb17658444979ba"} Dec 12 00:12:48 crc kubenswrapper[4917]: I1212 00:12:48.129152 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccdts" event={"ID":"4ba83efd-fe91-4187-ba9a-ee464371ba30","Type":"ContainerStarted","Data":"69e9e01e758d2a7f998c1b44408855375c57b47679edd88febe1abd5d3f6276f"} Dec 12 00:12:48 crc kubenswrapper[4917]: 
I1212 00:12:48.131145 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79gn" event={"ID":"67ac8eda-571d-4185-b4ee-e6eff7fab229","Type":"ContainerStarted","Data":"b133cca38230fe396282d121ff0483d86b1b2c71d889598bea9d2a2af138e07a"} Dec 12 00:12:48 crc kubenswrapper[4917]: I1212 00:12:48.133089 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqshb" event={"ID":"5ace861f-ace6-42d1-a717-37da333aed72","Type":"ContainerStarted","Data":"349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c"} Dec 12 00:12:48 crc kubenswrapper[4917]: I1212 00:12:48.164377 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tqshb" podStartSLOduration=10.522874973 podStartE2EDuration="43.164356267s" podCreationTimestamp="2025-12-12 00:12:05 +0000 UTC" firstStartedPulling="2025-12-12 00:12:06.803924318 +0000 UTC m=+361.581725131" lastFinishedPulling="2025-12-12 00:12:39.445405572 +0000 UTC m=+394.223206425" observedRunningTime="2025-12-12 00:12:48.162528359 +0000 UTC m=+402.940329182" watchObservedRunningTime="2025-12-12 00:12:48.164356267 +0000 UTC m=+402.942157080" Dec 12 00:12:48 crc kubenswrapper[4917]: I1212 00:12:48.164742 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hcbwr" podStartSLOduration=3.428460297 podStartE2EDuration="43.164736358s" podCreationTimestamp="2025-12-12 00:12:05 +0000 UTC" firstStartedPulling="2025-12-12 00:12:06.784111126 +0000 UTC m=+361.561911939" lastFinishedPulling="2025-12-12 00:12:46.520387137 +0000 UTC m=+401.298188000" observedRunningTime="2025-12-12 00:12:48.146306213 +0000 UTC m=+402.924107026" watchObservedRunningTime="2025-12-12 00:12:48.164736358 +0000 UTC m=+402.942537171" Dec 12 00:12:48 crc kubenswrapper[4917]: I1212 00:12:48.187579 4917 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-operators-ccdts" podStartSLOduration=6.090096884 podStartE2EDuration="45.187558472s" podCreationTimestamp="2025-12-12 00:12:03 +0000 UTC" firstStartedPulling="2025-12-12 00:12:06.789907502 +0000 UTC m=+361.567708315" lastFinishedPulling="2025-12-12 00:12:45.88736909 +0000 UTC m=+400.665169903" observedRunningTime="2025-12-12 00:12:48.182387473 +0000 UTC m=+402.960188296" watchObservedRunningTime="2025-12-12 00:12:48.187558472 +0000 UTC m=+402.965359275" Dec 12 00:12:48 crc kubenswrapper[4917]: I1212 00:12:48.207995 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p79gn" podStartSLOduration=6.454144857 podStartE2EDuration="46.207964951s" podCreationTimestamp="2025-12-12 00:12:02 +0000 UTC" firstStartedPulling="2025-12-12 00:12:06.784108536 +0000 UTC m=+361.561909349" lastFinishedPulling="2025-12-12 00:12:46.53792863 +0000 UTC m=+401.315729443" observedRunningTime="2025-12-12 00:12:48.207163019 +0000 UTC m=+402.984963842" watchObservedRunningTime="2025-12-12 00:12:48.207964951 +0000 UTC m=+402.985765764" Dec 12 00:12:53 crc kubenswrapper[4917]: I1212 00:12:53.342948 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:53 crc kubenswrapper[4917]: I1212 00:12:53.343320 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:53 crc kubenswrapper[4917]: I1212 00:12:53.392944 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:53 crc kubenswrapper[4917]: I1212 00:12:53.531884 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:53 crc kubenswrapper[4917]: I1212 00:12:53.531973 4917 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:53 crc kubenswrapper[4917]: I1212 00:12:53.582927 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:54 crc kubenswrapper[4917]: I1212 00:12:54.202160 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p79gn" Dec 12 00:12:54 crc kubenswrapper[4917]: I1212 00:12:54.202899 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ccdts" Dec 12 00:12:55 crc kubenswrapper[4917]: I1212 00:12:55.737452 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:55 crc kubenswrapper[4917]: I1212 00:12:55.738748 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:55 crc kubenswrapper[4917]: I1212 00:12:55.806581 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:55 crc kubenswrapper[4917]: I1212 00:12:55.911840 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:55 crc kubenswrapper[4917]: I1212 00:12:55.911905 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:55 crc kubenswrapper[4917]: I1212 00:12:55.949693 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:56 crc kubenswrapper[4917]: I1212 00:12:56.252046 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:12:56 crc 
kubenswrapper[4917]: I1212 00:12:56.265208 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:12:59 crc kubenswrapper[4917]: I1212 00:12:59.639632 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:12:59 crc kubenswrapper[4917]: I1212 00:12:59.640374 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:13:29 crc kubenswrapper[4917]: I1212 00:13:29.639818 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:13:29 crc kubenswrapper[4917]: I1212 00:13:29.640464 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:13:29 crc kubenswrapper[4917]: I1212 00:13:29.640530 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:13:29 crc kubenswrapper[4917]: I1212 00:13:29.641438 4917 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b7c442f4c3460c05c98aadf1500d6fe9cd23a4e533cf7d6262e8d9432e3dd4c"} pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 00:13:29 crc kubenswrapper[4917]: I1212 00:13:29.641516 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" containerID="cri-o://6b7c442f4c3460c05c98aadf1500d6fe9cd23a4e533cf7d6262e8d9432e3dd4c" gracePeriod=600 Dec 12 00:13:30 crc kubenswrapper[4917]: I1212 00:13:30.387388 4917 generic.go:334] "Generic (PLEG): container finished" podID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerID="6b7c442f4c3460c05c98aadf1500d6fe9cd23a4e533cf7d6262e8d9432e3dd4c" exitCode=0 Dec 12 00:13:30 crc kubenswrapper[4917]: I1212 00:13:30.387490 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerDied","Data":"6b7c442f4c3460c05c98aadf1500d6fe9cd23a4e533cf7d6262e8d9432e3dd4c"} Dec 12 00:13:30 crc kubenswrapper[4917]: I1212 00:13:30.388060 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"609179966f68944db1a8faa727ef80c93ed9a65e3cfff2bc35c173ceb0b60e5c"} Dec 12 00:13:30 crc kubenswrapper[4917]: I1212 00:13:30.388110 4917 scope.go:117] "RemoveContainer" containerID="9edce719905125f68295d2fe9c0b06b43d8acb7bf90b7876751ed187433af7eb" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.177766 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f"] 
Dec 12 00:15:00 crc kubenswrapper[4917]: E1212 00:15:00.180373 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4662d0-8501-4627-81b8-fdfffff90309" containerName="registry" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.180471 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4662d0-8501-4627-81b8-fdfffff90309" containerName="registry" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.180745 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4662d0-8501-4627-81b8-fdfffff90309" containerName="registry" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.181402 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.183877 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.184515 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.191445 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f"] Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.359038 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmq2f\" (UniqueName: \"kubernetes.io/projected/a3b131de-26a5-41fa-b227-5debefaed5ee-kube-api-access-kmq2f\") pod \"collect-profiles-29424975-dk68f\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.359086 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3b131de-26a5-41fa-b227-5debefaed5ee-secret-volume\") pod \"collect-profiles-29424975-dk68f\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.359171 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3b131de-26a5-41fa-b227-5debefaed5ee-config-volume\") pod \"collect-profiles-29424975-dk68f\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.460445 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3b131de-26a5-41fa-b227-5debefaed5ee-secret-volume\") pod \"collect-profiles-29424975-dk68f\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.460568 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3b131de-26a5-41fa-b227-5debefaed5ee-config-volume\") pod \"collect-profiles-29424975-dk68f\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.460614 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmq2f\" (UniqueName: \"kubernetes.io/projected/a3b131de-26a5-41fa-b227-5debefaed5ee-kube-api-access-kmq2f\") pod \"collect-profiles-29424975-dk68f\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 
00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.461770 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3b131de-26a5-41fa-b227-5debefaed5ee-config-volume\") pod \"collect-profiles-29424975-dk68f\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.468467 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3b131de-26a5-41fa-b227-5debefaed5ee-secret-volume\") pod \"collect-profiles-29424975-dk68f\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.478035 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmq2f\" (UniqueName: \"kubernetes.io/projected/a3b131de-26a5-41fa-b227-5debefaed5ee-kube-api-access-kmq2f\") pod \"collect-profiles-29424975-dk68f\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.504720 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.700601 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f"] Dec 12 00:15:00 crc kubenswrapper[4917]: I1212 00:15:00.917464 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" event={"ID":"a3b131de-26a5-41fa-b227-5debefaed5ee","Type":"ContainerStarted","Data":"d38212066c45eb4b61a656673cfdd59ba5ce418bef12636b78f27385a6f7e4d5"} Dec 12 00:15:01 crc kubenswrapper[4917]: I1212 00:15:01.927984 4917 generic.go:334] "Generic (PLEG): container finished" podID="a3b131de-26a5-41fa-b227-5debefaed5ee" containerID="9d4cfc3bfe7326041195332f2f7cafeac1304502bb59279d114b7ae4479760d4" exitCode=0 Dec 12 00:15:01 crc kubenswrapper[4917]: I1212 00:15:01.928506 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" event={"ID":"a3b131de-26a5-41fa-b227-5debefaed5ee","Type":"ContainerDied","Data":"9d4cfc3bfe7326041195332f2f7cafeac1304502bb59279d114b7ae4479760d4"} Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.160796 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.305939 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmq2f\" (UniqueName: \"kubernetes.io/projected/a3b131de-26a5-41fa-b227-5debefaed5ee-kube-api-access-kmq2f\") pod \"a3b131de-26a5-41fa-b227-5debefaed5ee\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.306090 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3b131de-26a5-41fa-b227-5debefaed5ee-config-volume\") pod \"a3b131de-26a5-41fa-b227-5debefaed5ee\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.306131 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3b131de-26a5-41fa-b227-5debefaed5ee-secret-volume\") pod \"a3b131de-26a5-41fa-b227-5debefaed5ee\" (UID: \"a3b131de-26a5-41fa-b227-5debefaed5ee\") " Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.306683 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3b131de-26a5-41fa-b227-5debefaed5ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "a3b131de-26a5-41fa-b227-5debefaed5ee" (UID: "a3b131de-26a5-41fa-b227-5debefaed5ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.312540 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3b131de-26a5-41fa-b227-5debefaed5ee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a3b131de-26a5-41fa-b227-5debefaed5ee" (UID: "a3b131de-26a5-41fa-b227-5debefaed5ee"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.313012 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b131de-26a5-41fa-b227-5debefaed5ee-kube-api-access-kmq2f" (OuterVolumeSpecName: "kube-api-access-kmq2f") pod "a3b131de-26a5-41fa-b227-5debefaed5ee" (UID: "a3b131de-26a5-41fa-b227-5debefaed5ee"). InnerVolumeSpecName "kube-api-access-kmq2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.408042 4917 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3b131de-26a5-41fa-b227-5debefaed5ee-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.408120 4917 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3b131de-26a5-41fa-b227-5debefaed5ee-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.408134 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmq2f\" (UniqueName: \"kubernetes.io/projected/a3b131de-26a5-41fa-b227-5debefaed5ee-kube-api-access-kmq2f\") on node \"crc\" DevicePath \"\"" Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.941825 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" event={"ID":"a3b131de-26a5-41fa-b227-5debefaed5ee","Type":"ContainerDied","Data":"d38212066c45eb4b61a656673cfdd59ba5ce418bef12636b78f27385a6f7e4d5"} Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.941876 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d38212066c45eb4b61a656673cfdd59ba5ce418bef12636b78f27385a6f7e4d5" Dec 12 00:15:03 crc kubenswrapper[4917]: I1212 00:15:03.941906 4917 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424975-dk68f" Dec 12 00:15:29 crc kubenswrapper[4917]: I1212 00:15:29.640068 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:15:29 crc kubenswrapper[4917]: I1212 00:15:29.640590 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:15:59 crc kubenswrapper[4917]: I1212 00:15:59.640010 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:15:59 crc kubenswrapper[4917]: I1212 00:15:59.640577 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:16:29 crc kubenswrapper[4917]: I1212 00:16:29.639133 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:16:29 crc kubenswrapper[4917]: 
I1212 00:16:29.639667 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 00:16:29 crc kubenswrapper[4917]: I1212 00:16:29.639730 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt"
Dec 12 00:16:29 crc kubenswrapper[4917]: I1212 00:16:29.640406 4917 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"609179966f68944db1a8faa727ef80c93ed9a65e3cfff2bc35c173ceb0b60e5c"} pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 00:16:29 crc kubenswrapper[4917]: I1212 00:16:29.640476 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" containerID="cri-o://609179966f68944db1a8faa727ef80c93ed9a65e3cfff2bc35c173ceb0b60e5c" gracePeriod=600
Dec 12 00:16:30 crc kubenswrapper[4917]: I1212 00:16:30.422838 4917 generic.go:334] "Generic (PLEG): container finished" podID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerID="609179966f68944db1a8faa727ef80c93ed9a65e3cfff2bc35c173ceb0b60e5c" exitCode=0
Dec 12 00:16:30 crc kubenswrapper[4917]: I1212 00:16:30.422922 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerDied","Data":"609179966f68944db1a8faa727ef80c93ed9a65e3cfff2bc35c173ceb0b60e5c"}
Dec 12 00:16:30 crc kubenswrapper[4917]: I1212 00:16:30.423419 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"96ec4d8e61f5bcbe03d7050d140b399f2045053de88e96d003dcf4d699ca9b59"}
Dec 12 00:16:30 crc kubenswrapper[4917]: I1212 00:16:30.423444 4917 scope.go:117] "RemoveContainer" containerID="6b7c442f4c3460c05c98aadf1500d6fe9cd23a4e533cf7d6262e8d9432e3dd4c"
Dec 12 00:18:01 crc kubenswrapper[4917]: I1212 00:18:01.531428 4917 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pv86t container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 00:18:01 crc kubenswrapper[4917]: I1212 00:18:01.531554 4917 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pv86t container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 00:18:01 crc kubenswrapper[4917]: I1212 00:18:01.531917 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" podUID="be440828-8884-4cd3-b30e-4eba825caa3b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 12 00:18:01 crc kubenswrapper[4917]: I1212 00:18:01.532014 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pv86t" podUID="be440828-8884-4cd3-b30e-4eba825caa3b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 12 00:18:05 crc kubenswrapper[4917]: I1212 00:18:05.937668 4917 scope.go:117] "RemoveContainer" containerID="0a53b8523767415f9d703a5623f7d0464d0a30530736498ec770d6739a8910d2"
Dec 12 00:18:15 crc kubenswrapper[4917]: I1212 00:18:15.211743 4917 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 12 00:18:27 crc kubenswrapper[4917]: I1212 00:18:27.107598 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-26hjd"]
Dec 12 00:18:27 crc kubenswrapper[4917]: I1212 00:18:27.108409 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovn-controller" containerID="cri-o://040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03" gracePeriod=30
Dec 12 00:18:27 crc kubenswrapper[4917]: I1212 00:18:27.108499 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="nbdb" containerID="cri-o://ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e" gracePeriod=30
Dec 12 00:18:27 crc kubenswrapper[4917]: I1212 00:18:27.108558 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="sbdb" containerID="cri-o://ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19" gracePeriod=30
Dec 12 00:18:27 crc kubenswrapper[4917]: I1212 00:18:27.108569 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="kube-rbac-proxy-node" containerID="cri-o://dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e" gracePeriod=30
Dec 12 00:18:27 crc kubenswrapper[4917]: I1212 00:18:27.108672 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7" gracePeriod=30
Dec 12 00:18:27 crc kubenswrapper[4917]: I1212 00:18:27.108703 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="northd" containerID="cri-o://254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7" gracePeriod=30
Dec 12 00:18:27 crc kubenswrapper[4917]: I1212 00:18:27.108769 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovn-acl-logging" containerID="cri-o://67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa" gracePeriod=30
Dec 12 00:18:27 crc kubenswrapper[4917]: I1212 00:18:27.147624 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller" containerID="cri-o://9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52" gracePeriod=30
Dec 12 00:18:28 crc kubenswrapper[4917]: I1212 00:18:28.083010 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/2.log"
Dec 12 00:18:28 crc kubenswrapper[4917]: I1212 00:18:28.085509 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovn-acl-logging/0.log"
Dec 12 00:18:28 crc kubenswrapper[4917]: I1212 00:18:28.086158 4917 generic.go:334] "Generic (PLEG): container finished" podID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerID="67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa" exitCode=143
Dec 12 00:18:28 crc kubenswrapper[4917]: I1212 00:18:28.086191 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa"}
Dec 12 00:18:29 crc kubenswrapper[4917]: I1212 00:18:29.638984 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 00:18:29 crc kubenswrapper[4917]: I1212 00:18:29.639054 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 00:18:30 crc kubenswrapper[4917]: E1212 00:18:30.741393 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e is running failed: container process not found" containerID="ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Dec 12 00:18:30 crc kubenswrapper[4917]: E1212 00:18:30.741440 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19 is running failed: container process not found" containerID="ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Dec 12 00:18:30 crc kubenswrapper[4917]: E1212 00:18:30.742297 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e is running failed: container process not found" containerID="ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Dec 12 00:18:30 crc kubenswrapper[4917]: E1212 00:18:30.742409 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19 is running failed: container process not found" containerID="ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Dec 12 00:18:30 crc kubenswrapper[4917]: E1212 00:18:30.742558 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e is running failed: container process not found" containerID="ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Dec 12 00:18:30 crc kubenswrapper[4917]: E1212 00:18:30.742587 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="nbdb"
Dec 12 00:18:30 crc kubenswrapper[4917]: E1212 00:18:30.742854 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19 is running failed: container process not found" containerID="ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Dec 12 00:18:30 crc kubenswrapper[4917]: E1212 00:18:30.742877 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="sbdb"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.102560 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24mnq_7ee00e08-bb29-427d-9de3-6b0616e409fe/kube-multus/1.log"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.103135 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24mnq_7ee00e08-bb29-427d-9de3-6b0616e409fe/kube-multus/0.log"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.103192 4917 generic.go:334] "Generic (PLEG): container finished" podID="7ee00e08-bb29-427d-9de3-6b0616e409fe" containerID="0c68257e5dd1d97628cb53c884e963ded61b1a597be47717aceb3b97fde8f979" exitCode=2
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.103243 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24mnq" event={"ID":"7ee00e08-bb29-427d-9de3-6b0616e409fe","Type":"ContainerDied","Data":"0c68257e5dd1d97628cb53c884e963ded61b1a597be47717aceb3b97fde8f979"}
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.103310 4917 scope.go:117] "RemoveContainer" containerID="81df9e2f72ca34972c53db1d905fe810c618940b493607c9b6ad10aaba7aafb4"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.103879 4917 scope.go:117] "RemoveContainer" containerID="0c68257e5dd1d97628cb53c884e963ded61b1a597be47717aceb3b97fde8f979"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.569228 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/2.log"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.572146 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovn-acl-logging/0.log"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.572767 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovn-controller/0.log"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.573230 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.623655 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dgccd"]
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.623995 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovn-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624019 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovn-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624032 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="sbdb"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624039 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="sbdb"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624052 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="northd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624059 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="northd"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624068 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="kubecfg-setup"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624075 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="kubecfg-setup"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624082 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624089 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624098 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624103 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624113 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="kube-rbac-proxy-ovn-metrics"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624119 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="kube-rbac-proxy-ovn-metrics"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624127 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b131de-26a5-41fa-b227-5debefaed5ee" containerName="collect-profiles"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624133 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b131de-26a5-41fa-b227-5debefaed5ee" containerName="collect-profiles"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624143 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="kube-rbac-proxy-node"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624150 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="kube-rbac-proxy-node"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624156 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="nbdb"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624161 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="nbdb"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624167 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovn-acl-logging"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624174 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovn-acl-logging"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624277 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="sbdb"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624286 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="kube-rbac-proxy-node"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624294 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="northd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624300 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovn-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624309 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="nbdb"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624318 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624328 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovn-acl-logging"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624336 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="kube-rbac-proxy-ovn-metrics"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624344 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624349 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624359 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b131de-26a5-41fa-b227-5debefaed5ee" containerName="collect-profiles"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624525 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624535 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.624544 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624550 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.624660 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerName="ovnkube-controller"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.626761 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.628197 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovnkube-controller/2.log"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.633540 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovn-acl-logging/0.log"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634122 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-26hjd_c740630c-23cb-4c02-ab4e-bac3d773dce4/ovn-controller/0.log"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634538 4917 generic.go:334] "Generic (PLEG): container finished" podID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerID="9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52" exitCode=0
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634561 4917 generic.go:334] "Generic (PLEG): container finished" podID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerID="254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7" exitCode=0
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634571 4917 generic.go:334] "Generic (PLEG): container finished" podID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerID="ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7" exitCode=0
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634578 4917 generic.go:334] "Generic (PLEG): container finished" podID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerID="dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e" exitCode=0
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634586 4917 generic.go:334] "Generic (PLEG): container finished" podID="c740630c-23cb-4c02-ab4e-bac3d773dce4" containerID="040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03" exitCode=143
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634611 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52"}
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634639 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7"}
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634666 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7"}
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634677 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e"}
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634688 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03"}
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.634707 4917 scope.go:117] "RemoveContainer" containerID="9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646271 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-kubelet\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646344 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-run-ovn\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646374 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-run-systemd\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646427 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-cni-netd\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646466 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwrq\" (UniqueName: \"kubernetes.io/projected/d6185340-ee05-435d-a2f2-3e0f89ffeddd-kube-api-access-jdwrq\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646498 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646525 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-run-openvswitch\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646557 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-etc-openvswitch\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646588 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6185340-ee05-435d-a2f2-3e0f89ffeddd-env-overrides\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646625 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-run-netns\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646668 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-var-lib-openvswitch\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646695 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-systemd-units\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646734 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-cni-bin\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646763 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-log-socket\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646796 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6185340-ee05-435d-a2f2-3e0f89ffeddd-ovn-node-metrics-cert\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646819 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6185340-ee05-435d-a2f2-3e0f89ffeddd-ovnkube-config\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646851 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-run-ovn-kubernetes\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646873 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-node-log\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646895 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-slash\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.646921 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6185340-ee05-435d-a2f2-3e0f89ffeddd-ovnkube-script-lib\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.677513 4917 scope.go:117] "RemoveContainer" containerID="25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.692607 4917 scope.go:117] "RemoveContainer" containerID="ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.719109 4917 scope.go:117] "RemoveContainer" containerID="ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.734040 4917 scope.go:117] "RemoveContainer" containerID="254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7"
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.748015 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-openvswitch\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") "
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.749572 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") "
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752058 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-etc-openvswitch\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") "
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752299 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-systemd-units\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") "
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752428 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-netns\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") "
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752557 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-script-lib\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") "
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752679 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-ovn-kubernetes\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") "
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.748251 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752842 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-ovn\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") "
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.749629 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752099 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752333 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752469 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752776 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752935 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752911 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-netd\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.752995 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-log-socket\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753050 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovn-node-metrics-cert\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753073 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-kubelet\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753114 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-bin\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753145 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-slash\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753144 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-log-socket" (OuterVolumeSpecName: "log-socket") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753175 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9gct\" (UniqueName: \"kubernetes.io/projected/c740630c-23cb-4c02-ab4e-bac3d773dce4-kube-api-access-k9gct\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753212 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753224 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753247 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-slash" (OuterVolumeSpecName: "host-slash") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753337 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-config\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753388 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-node-log\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753409 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-env-overrides\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753424 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-systemd\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753420 4917 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-node-log" (OuterVolumeSpecName: "node-log") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753450 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-var-lib-openvswitch\") pod \"c740630c-23cb-4c02-ab4e-bac3d773dce4\" (UID: \"c740630c-23cb-4c02-ab4e-bac3d773dce4\") " Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753451 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753698 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-run-ovn-kubernetes\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753727 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-node-log\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753746 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-slash\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753771 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6185340-ee05-435d-a2f2-3e0f89ffeddd-ovnkube-script-lib\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753805 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-run-ovn-kubernetes\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 
00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753818 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753835 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-kubelet\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753871 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-kubelet\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753876 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-slash\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753915 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-node-log\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753950 
4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-run-ovn\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.753981 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-cni-netd\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754000 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-run-systemd\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754059 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdwrq\" (UniqueName: \"kubernetes.io/projected/d6185340-ee05-435d-a2f2-3e0f89ffeddd-kube-api-access-jdwrq\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754094 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754115 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-run-openvswitch\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754146 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-run-systemd\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754171 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-etc-openvswitch\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754152 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-etc-openvswitch\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754222 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6185340-ee05-435d-a2f2-3e0f89ffeddd-env-overrides\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754272 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-run-netns\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754302 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-var-lib-openvswitch\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754328 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-systemd-units\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754377 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-cni-bin\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754418 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-log-socket\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754464 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d6185340-ee05-435d-a2f2-3e0f89ffeddd-ovn-node-metrics-cert\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754528 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6185340-ee05-435d-a2f2-3e0f89ffeddd-ovnkube-config\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754542 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754609 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6185340-ee05-435d-a2f2-3e0f89ffeddd-ovnkube-script-lib\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754631 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-log-socket\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754677 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-run-ovn\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754683 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-systemd-units\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754705 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-cni-bin\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754736 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-run-openvswitch\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754758 4917 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.754970 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755005 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-cni-netd\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755037 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-host-run-netns\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755064 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6185340-ee05-435d-a2f2-3e0f89ffeddd-var-lib-openvswitch\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755084 4917 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755101 4917 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755114 4917 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-netns\") 
on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755127 4917 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755140 4917 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755155 4917 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755168 4917 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-log-socket\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755180 4917 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755191 4917 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755203 4917 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-host-slash\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755216 4917 
reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-node-log\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755217 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6185340-ee05-435d-a2f2-3e0f89ffeddd-ovnkube-config\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755228 4917 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755317 4917 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.755682 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6185340-ee05-435d-a2f2-3e0f89ffeddd-env-overrides\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.756025 4917 scope.go:117] "RemoveContainer" containerID="ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.756399 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). 
InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.756574 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.760944 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6185340-ee05-435d-a2f2-3e0f89ffeddd-ovn-node-metrics-cert\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.768196 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c740630c-23cb-4c02-ab4e-bac3d773dce4-kube-api-access-k9gct" (OuterVolumeSpecName: "kube-api-access-k9gct") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "kube-api-access-k9gct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.770114 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.772757 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdwrq\" (UniqueName: \"kubernetes.io/projected/d6185340-ee05-435d-a2f2-3e0f89ffeddd-kube-api-access-jdwrq\") pod \"ovnkube-node-dgccd\" (UID: \"d6185340-ee05-435d-a2f2-3e0f89ffeddd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.777931 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c740630c-23cb-4c02-ab4e-bac3d773dce4" (UID: "c740630c-23cb-4c02-ab4e-bac3d773dce4"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.811389 4917 scope.go:117] "RemoveContainer" containerID="dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.830709 4917 scope.go:117] "RemoveContainer" containerID="67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.843969 4917 scope.go:117] "RemoveContainer" containerID="040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.858191 4917 scope.go:117] "RemoveContainer" containerID="958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.858745 4917 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.858787 4917 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.858800 4917 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c740630c-23cb-4c02-ab4e-bac3d773dce4-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.858811 4917 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.858824 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9gct\" (UniqueName: \"kubernetes.io/projected/c740630c-23cb-4c02-ab4e-bac3d773dce4-kube-api-access-k9gct\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.858837 4917 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c740630c-23cb-4c02-ab4e-bac3d773dce4-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.873765 4917 scope.go:117] "RemoveContainer" containerID="9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52" Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.874315 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52\": container with ID starting with 9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52 not found: ID does not exist" containerID="9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.874345 4917 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52"} err="failed to get container status \"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52\": rpc error: code = NotFound desc = could not find container \"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52\": container with ID starting with 9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.874366 4917 scope.go:117] "RemoveContainer" containerID="25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15" Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.875233 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\": container with ID starting with 25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15 not found: ID does not exist" containerID="25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.875331 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15"} err="failed to get container status \"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\": rpc error: code = NotFound desc = could not find container \"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\": container with ID starting with 25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.875423 4917 scope.go:117] "RemoveContainer" containerID="ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19" Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.876423 4917 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\": container with ID starting with ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19 not found: ID does not exist" containerID="ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.876464 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19"} err="failed to get container status \"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\": rpc error: code = NotFound desc = could not find container \"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\": container with ID starting with ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.876487 4917 scope.go:117] "RemoveContainer" containerID="ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e" Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.876904 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\": container with ID starting with ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e not found: ID does not exist" containerID="ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.876961 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e"} err="failed to get container status \"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\": rpc error: code = NotFound desc = could not find container 
\"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\": container with ID starting with ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.877193 4917 scope.go:117] "RemoveContainer" containerID="254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7" Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.877548 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\": container with ID starting with 254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7 not found: ID does not exist" containerID="254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.877582 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7"} err="failed to get container status \"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\": rpc error: code = NotFound desc = could not find container \"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\": container with ID starting with 254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.877610 4917 scope.go:117] "RemoveContainer" containerID="ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7" Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.877981 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\": container with ID starting with ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7 not found: ID does not exist" 
containerID="ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.878096 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7"} err="failed to get container status \"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\": rpc error: code = NotFound desc = could not find container \"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\": container with ID starting with ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.878129 4917 scope.go:117] "RemoveContainer" containerID="dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e" Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.878446 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\": container with ID starting with dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e not found: ID does not exist" containerID="dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.878477 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e"} err="failed to get container status \"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\": rpc error: code = NotFound desc = could not find container \"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\": container with ID starting with dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.878498 4917 scope.go:117] 
"RemoveContainer" containerID="67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa" Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.878826 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\": container with ID starting with 67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa not found: ID does not exist" containerID="67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.878889 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa"} err="failed to get container status \"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\": rpc error: code = NotFound desc = could not find container \"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\": container with ID starting with 67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.878913 4917 scope.go:117] "RemoveContainer" containerID="040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03" Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.880154 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\": container with ID starting with 040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03 not found: ID does not exist" containerID="040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.880190 4917 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03"} err="failed to get container status \"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\": rpc error: code = NotFound desc = could not find container \"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\": container with ID starting with 040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.880209 4917 scope.go:117] "RemoveContainer" containerID="958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51" Dec 12 00:18:31 crc kubenswrapper[4917]: E1212 00:18:31.882075 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\": container with ID starting with 958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51 not found: ID does not exist" containerID="958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.882794 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51"} err="failed to get container status \"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\": rpc error: code = NotFound desc = could not find container \"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\": container with ID starting with 958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.882869 4917 scope.go:117] "RemoveContainer" containerID="9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.883277 4917 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52"} err="failed to get container status \"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52\": rpc error: code = NotFound desc = could not find container \"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52\": container with ID starting with 9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.883299 4917 scope.go:117] "RemoveContainer" containerID="25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.883605 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15"} err="failed to get container status \"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\": rpc error: code = NotFound desc = could not find container \"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\": container with ID starting with 25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.883626 4917 scope.go:117] "RemoveContainer" containerID="ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.884507 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19"} err="failed to get container status \"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\": rpc error: code = NotFound desc = could not find container \"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\": container with ID starting with ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19 not 
found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.884539 4917 scope.go:117] "RemoveContainer" containerID="ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.884895 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e"} err="failed to get container status \"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\": rpc error: code = NotFound desc = could not find container \"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\": container with ID starting with ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.884925 4917 scope.go:117] "RemoveContainer" containerID="254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.885309 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7"} err="failed to get container status \"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\": rpc error: code = NotFound desc = could not find container \"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\": container with ID starting with 254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.885333 4917 scope.go:117] "RemoveContainer" containerID="ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.885882 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7"} err="failed to get 
container status \"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\": rpc error: code = NotFound desc = could not find container \"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\": container with ID starting with ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.885913 4917 scope.go:117] "RemoveContainer" containerID="dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.886211 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e"} err="failed to get container status \"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\": rpc error: code = NotFound desc = could not find container \"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\": container with ID starting with dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.886232 4917 scope.go:117] "RemoveContainer" containerID="67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.886448 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa"} err="failed to get container status \"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\": rpc error: code = NotFound desc = could not find container \"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\": container with ID starting with 67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.886473 4917 scope.go:117] "RemoveContainer" 
containerID="040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.886774 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03"} err="failed to get container status \"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\": rpc error: code = NotFound desc = could not find container \"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\": container with ID starting with 040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.886802 4917 scope.go:117] "RemoveContainer" containerID="958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.887032 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51"} err="failed to get container status \"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\": rpc error: code = NotFound desc = could not find container \"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\": container with ID starting with 958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.887054 4917 scope.go:117] "RemoveContainer" containerID="9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.887400 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52"} err="failed to get container status \"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52\": rpc error: code = NotFound desc = could 
not find container \"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52\": container with ID starting with 9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.887420 4917 scope.go:117] "RemoveContainer" containerID="25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.887658 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15"} err="failed to get container status \"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\": rpc error: code = NotFound desc = could not find container \"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\": container with ID starting with 25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.887687 4917 scope.go:117] "RemoveContainer" containerID="ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.887957 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19"} err="failed to get container status \"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\": rpc error: code = NotFound desc = could not find container \"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\": container with ID starting with ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.887988 4917 scope.go:117] "RemoveContainer" containerID="ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 
00:18:31.888268 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e"} err="failed to get container status \"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\": rpc error: code = NotFound desc = could not find container \"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\": container with ID starting with ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.888294 4917 scope.go:117] "RemoveContainer" containerID="254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.888559 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7"} err="failed to get container status \"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\": rpc error: code = NotFound desc = could not find container \"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\": container with ID starting with 254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.888580 4917 scope.go:117] "RemoveContainer" containerID="ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.888825 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7"} err="failed to get container status \"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\": rpc error: code = NotFound desc = could not find container \"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\": container with ID starting with 
ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.888848 4917 scope.go:117] "RemoveContainer" containerID="dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.889150 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e"} err="failed to get container status \"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\": rpc error: code = NotFound desc = could not find container \"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\": container with ID starting with dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.889171 4917 scope.go:117] "RemoveContainer" containerID="67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.889438 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa"} err="failed to get container status \"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\": rpc error: code = NotFound desc = could not find container \"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\": container with ID starting with 67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.889463 4917 scope.go:117] "RemoveContainer" containerID="040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.889731 4917 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03"} err="failed to get container status \"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\": rpc error: code = NotFound desc = could not find container \"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\": container with ID starting with 040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.889750 4917 scope.go:117] "RemoveContainer" containerID="958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.890010 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51"} err="failed to get container status \"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\": rpc error: code = NotFound desc = could not find container \"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\": container with ID starting with 958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.890047 4917 scope.go:117] "RemoveContainer" containerID="9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.890271 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52"} err="failed to get container status \"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52\": rpc error: code = NotFound desc = could not find container \"9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52\": container with ID starting with 9742e1babcc828413c3af57fac19902f27050fff3afa18593f3edb1ce8ea5b52 not found: ID does not 
exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.890292 4917 scope.go:117] "RemoveContainer" containerID="25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.890565 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15"} err="failed to get container status \"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\": rpc error: code = NotFound desc = could not find container \"25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15\": container with ID starting with 25eaa0fa1748adfa7b8c089f10108b4839ccf071c1d7a3174d71a45fad688d15 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.890583 4917 scope.go:117] "RemoveContainer" containerID="ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.890850 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19"} err="failed to get container status \"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\": rpc error: code = NotFound desc = could not find container \"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19\": container with ID starting with ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.890875 4917 scope.go:117] "RemoveContainer" containerID="ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.891137 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e"} err="failed to get container status 
\"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\": rpc error: code = NotFound desc = could not find container \"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e\": container with ID starting with ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.891171 4917 scope.go:117] "RemoveContainer" containerID="254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.891457 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7"} err="failed to get container status \"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\": rpc error: code = NotFound desc = could not find container \"254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7\": container with ID starting with 254965299f07b1777b540ed0c52e366a797bae1becae04e3e8f5a03fec9de0c7 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.891482 4917 scope.go:117] "RemoveContainer" containerID="ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.891868 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7"} err="failed to get container status \"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\": rpc error: code = NotFound desc = could not find container \"ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7\": container with ID starting with ad37074d127905e5de2bf13023f50ed506af1c74ba722f4a4bc8dae1f9f511d7 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.891897 4917 scope.go:117] "RemoveContainer" 
containerID="dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.892185 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e"} err="failed to get container status \"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\": rpc error: code = NotFound desc = could not find container \"dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e\": container with ID starting with dbcbecc1664900af6346060c1ee7387edeb995248c9e5a2aee2818ae5382815e not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.892218 4917 scope.go:117] "RemoveContainer" containerID="67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.892462 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa"} err="failed to get container status \"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\": rpc error: code = NotFound desc = could not find container \"67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa\": container with ID starting with 67ca7746710fc58609831d39b029e758bd95c691f2b76174d8d59398cf4847aa not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.892487 4917 scope.go:117] "RemoveContainer" containerID="040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.892919 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03"} err="failed to get container status \"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\": rpc error: code = NotFound desc = could 
not find container \"040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03\": container with ID starting with 040d5320d40f02b82e0e5f82509e9282ebbf3f6a51ce026952e34a31e5144f03 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.892950 4917 scope.go:117] "RemoveContainer" containerID="958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.893218 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51"} err="failed to get container status \"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\": rpc error: code = NotFound desc = could not find container \"958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51\": container with ID starting with 958374b7c54ea2c567e33efefb7a1163a5acddc8798ede3b7a128c8af2862c51 not found: ID does not exist" Dec 12 00:18:31 crc kubenswrapper[4917]: I1212 00:18:31.944878 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:31 crc kubenswrapper[4917]: W1212 00:18:31.966048 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6185340_ee05_435d_a2f2_3e0f89ffeddd.slice/crio-bb533e7a7749810e51474813ae107f78fad0b744a3ae2bbc5838ab5fc6cc545f WatchSource:0}: Error finding container bb533e7a7749810e51474813ae107f78fad0b744a3ae2bbc5838ab5fc6cc545f: Status 404 returned error can't find the container with id bb533e7a7749810e51474813ae107f78fad0b744a3ae2bbc5838ab5fc6cc545f Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.641898 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"ef56c3dad011f6392b8a61f86557f38177cd2238751d2458998d03bef504da19"} Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.642472 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"ba38738b11776268571fa83ce1cdc44d083ca1103188546f5a0cf39287a5285e"} Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.642603 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" event={"ID":"c740630c-23cb-4c02-ab4e-bac3d773dce4","Type":"ContainerDied","Data":"9ee4e2737bcbde4bd936b5b422d0c41bba4e5fe97648ffd5b55c6b2a072c04a3"} Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.641943 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-26hjd" Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.643422 4917 generic.go:334] "Generic (PLEG): container finished" podID="d6185340-ee05-435d-a2f2-3e0f89ffeddd" containerID="01fa4154c08854850175c680b349ab99dc2f997785e0f55bea4a3fdb8c7a6853" exitCode=0 Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.643487 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" event={"ID":"d6185340-ee05-435d-a2f2-3e0f89ffeddd","Type":"ContainerDied","Data":"01fa4154c08854850175c680b349ab99dc2f997785e0f55bea4a3fdb8c7a6853"} Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.643514 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" event={"ID":"d6185340-ee05-435d-a2f2-3e0f89ffeddd","Type":"ContainerStarted","Data":"bb533e7a7749810e51474813ae107f78fad0b744a3ae2bbc5838ab5fc6cc545f"} Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.647874 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-24mnq_7ee00e08-bb29-427d-9de3-6b0616e409fe/kube-multus/1.log" Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.647923 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-24mnq" event={"ID":"7ee00e08-bb29-427d-9de3-6b0616e409fe","Type":"ContainerStarted","Data":"3a7b07c94a80d0cbb60a1d3e41e59ab6ad8142b8b93c61873f5dd8e11cbeaa6f"} Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.729598 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-26hjd"] Dec 12 00:18:32 crc kubenswrapper[4917]: I1212 00:18:32.738553 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-26hjd"] Dec 12 00:18:33 crc kubenswrapper[4917]: I1212 00:18:33.610640 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c740630c-23cb-4c02-ab4e-bac3d773dce4" path="/var/lib/kubelet/pods/c740630c-23cb-4c02-ab4e-bac3d773dce4/volumes" Dec 12 00:18:33 crc kubenswrapper[4917]: I1212 00:18:33.655700 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" event={"ID":"d6185340-ee05-435d-a2f2-3e0f89ffeddd","Type":"ContainerStarted","Data":"f4438b8575999ed44159fbf64cf67be4fb6c01b3560a26c534a51b60cab8c9c6"} Dec 12 00:18:33 crc kubenswrapper[4917]: I1212 00:18:33.655935 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" event={"ID":"d6185340-ee05-435d-a2f2-3e0f89ffeddd","Type":"ContainerStarted","Data":"6bd637cad73062b6a3e49034a61af3c6df4b75fddec4d737d5ee9c835294e034"} Dec 12 00:18:33 crc kubenswrapper[4917]: I1212 00:18:33.655993 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" event={"ID":"d6185340-ee05-435d-a2f2-3e0f89ffeddd","Type":"ContainerStarted","Data":"42fee4ee4a7cf2b540c24b6bf53c3264b4fc63ab0f9a2830badb470eae6f4d26"} Dec 12 00:18:33 crc kubenswrapper[4917]: I1212 00:18:33.656077 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" event={"ID":"d6185340-ee05-435d-a2f2-3e0f89ffeddd","Type":"ContainerStarted","Data":"9f458d3e5e5d37e1146c03bc09a5cce15b0e7e2c6944d60c7a6be11e2c9c0b19"} Dec 12 00:18:33 crc kubenswrapper[4917]: I1212 00:18:33.656135 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" event={"ID":"d6185340-ee05-435d-a2f2-3e0f89ffeddd","Type":"ContainerStarted","Data":"bafd4a6c88f695b24114cd48d4ba677e17b0acf23ff2574190e8d712220b500c"} Dec 12 00:18:33 crc kubenswrapper[4917]: I1212 00:18:33.656189 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" 
event={"ID":"d6185340-ee05-435d-a2f2-3e0f89ffeddd","Type":"ContainerStarted","Data":"166fd0e5996f15642f87df0a18f9f4a0b39a7ce8b7db8d9a53caf8840034cb58"} Dec 12 00:18:36 crc kubenswrapper[4917]: I1212 00:18:36.675208 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" event={"ID":"d6185340-ee05-435d-a2f2-3e0f89ffeddd","Type":"ContainerStarted","Data":"86d94fe632b332704ae16b38d46a11a1105bcf0ed628149a62a7cb009abbe687"} Dec 12 00:18:39 crc kubenswrapper[4917]: I1212 00:18:39.697364 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" event={"ID":"d6185340-ee05-435d-a2f2-3e0f89ffeddd","Type":"ContainerStarted","Data":"4f8ca76eec25671fdaebb2d8a838b6a01f539a649c9c2d2dfd8c7e5bf21e1933"} Dec 12 00:18:39 crc kubenswrapper[4917]: I1212 00:18:39.697853 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:39 crc kubenswrapper[4917]: I1212 00:18:39.697915 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:39 crc kubenswrapper[4917]: I1212 00:18:39.697929 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:39 crc kubenswrapper[4917]: I1212 00:18:39.727177 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:39 crc kubenswrapper[4917]: I1212 00:18:39.730112 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" podStartSLOduration=8.730091011 podStartE2EDuration="8.730091011s" podCreationTimestamp="2025-12-12 00:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:18:39.727383138 +0000 UTC 
m=+754.505183961" watchObservedRunningTime="2025-12-12 00:18:39.730091011 +0000 UTC m=+754.507891824" Dec 12 00:18:39 crc kubenswrapper[4917]: I1212 00:18:39.731592 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:18:59 crc kubenswrapper[4917]: I1212 00:18:59.638868 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:18:59 crc kubenswrapper[4917]: I1212 00:18:59.639537 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:19:01 crc kubenswrapper[4917]: I1212 00:19:01.969628 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dgccd" Dec 12 00:19:29 crc kubenswrapper[4917]: I1212 00:19:29.639405 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:19:29 crc kubenswrapper[4917]: I1212 00:19:29.640046 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:19:29 crc 
kubenswrapper[4917]: I1212 00:19:29.640100 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:19:29 crc kubenswrapper[4917]: I1212 00:19:29.640751 4917 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96ec4d8e61f5bcbe03d7050d140b399f2045053de88e96d003dcf4d699ca9b59"} pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 00:19:29 crc kubenswrapper[4917]: I1212 00:19:29.640813 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" containerID="cri-o://96ec4d8e61f5bcbe03d7050d140b399f2045053de88e96d003dcf4d699ca9b59" gracePeriod=600 Dec 12 00:19:31 crc kubenswrapper[4917]: I1212 00:19:31.263935 4917 generic.go:334] "Generic (PLEG): container finished" podID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerID="96ec4d8e61f5bcbe03d7050d140b399f2045053de88e96d003dcf4d699ca9b59" exitCode=0 Dec 12 00:19:31 crc kubenswrapper[4917]: I1212 00:19:31.263998 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerDied","Data":"96ec4d8e61f5bcbe03d7050d140b399f2045053de88e96d003dcf4d699ca9b59"} Dec 12 00:19:31 crc kubenswrapper[4917]: I1212 00:19:31.264515 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"810e6b0f2d007409666f46f2e8eac0cefab026671305efff02967dc13a6c6eec"} Dec 12 00:19:31 crc kubenswrapper[4917]: I1212 
00:19:31.264538 4917 scope.go:117] "RemoveContainer" containerID="609179966f68944db1a8faa727ef80c93ed9a65e3cfff2bc35c173ceb0b60e5c" Dec 12 00:20:07 crc kubenswrapper[4917]: I1212 00:20:07.351229 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqshb"] Dec 12 00:20:07 crc kubenswrapper[4917]: I1212 00:20:07.352139 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tqshb" podUID="5ace861f-ace6-42d1-a717-37da333aed72" containerName="registry-server" containerID="cri-o://349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c" gracePeriod=30 Dec 12 00:20:07 crc kubenswrapper[4917]: I1212 00:20:07.998248 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.199622 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-catalog-content\") pod \"5ace861f-ace6-42d1-a717-37da333aed72\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.199730 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs2bc\" (UniqueName: \"kubernetes.io/projected/5ace861f-ace6-42d1-a717-37da333aed72-kube-api-access-xs2bc\") pod \"5ace861f-ace6-42d1-a717-37da333aed72\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.199760 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-utilities\") pod \"5ace861f-ace6-42d1-a717-37da333aed72\" (UID: \"5ace861f-ace6-42d1-a717-37da333aed72\") " Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 
00:20:08.201796 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-utilities" (OuterVolumeSpecName: "utilities") pod "5ace861f-ace6-42d1-a717-37da333aed72" (UID: "5ace861f-ace6-42d1-a717-37da333aed72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.207155 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ace861f-ace6-42d1-a717-37da333aed72-kube-api-access-xs2bc" (OuterVolumeSpecName: "kube-api-access-xs2bc") pod "5ace861f-ace6-42d1-a717-37da333aed72" (UID: "5ace861f-ace6-42d1-a717-37da333aed72"). InnerVolumeSpecName "kube-api-access-xs2bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.221739 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ace861f-ace6-42d1-a717-37da333aed72" (UID: "5ace861f-ace6-42d1-a717-37da333aed72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.300608 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.300657 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs2bc\" (UniqueName: \"kubernetes.io/projected/5ace861f-ace6-42d1-a717-37da333aed72-kube-api-access-xs2bc\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.300669 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ace861f-ace6-42d1-a717-37da333aed72-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.461129 4917 generic.go:334] "Generic (PLEG): container finished" podID="5ace861f-ace6-42d1-a717-37da333aed72" containerID="349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c" exitCode=0 Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.461202 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqshb" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.461198 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqshb" event={"ID":"5ace861f-ace6-42d1-a717-37da333aed72","Type":"ContainerDied","Data":"349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c"} Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.462076 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqshb" event={"ID":"5ace861f-ace6-42d1-a717-37da333aed72","Type":"ContainerDied","Data":"280c8b5e237c01a64c09ebbda7e32789390aa45d394069928f2da47caf0e58d3"} Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.462101 4917 scope.go:117] "RemoveContainer" containerID="349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.494022 4917 scope.go:117] "RemoveContainer" containerID="fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.502140 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqshb"] Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.510014 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqshb"] Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.521335 4917 scope.go:117] "RemoveContainer" containerID="f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.544603 4917 scope.go:117] "RemoveContainer" containerID="349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c" Dec 12 00:20:08 crc kubenswrapper[4917]: E1212 00:20:08.545059 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c\": container with ID starting with 349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c not found: ID does not exist" containerID="349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.545100 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c"} err="failed to get container status \"349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c\": rpc error: code = NotFound desc = could not find container \"349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c\": container with ID starting with 349fa45df105bf8396db6739eb0e5b72c4a1d8e04f6693865b432d165059e20c not found: ID does not exist" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.545128 4917 scope.go:117] "RemoveContainer" containerID="fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be" Dec 12 00:20:08 crc kubenswrapper[4917]: E1212 00:20:08.545593 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be\": container with ID starting with fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be not found: ID does not exist" containerID="fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.545667 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be"} err="failed to get container status \"fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be\": rpc error: code = NotFound desc = could not find container \"fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be\": container with ID 
starting with fd8deda900b7f521ae9b5ed95536d8eed6f21d773cf5bdeeca63070a0772f7be not found: ID does not exist" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.545730 4917 scope.go:117] "RemoveContainer" containerID="f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc" Dec 12 00:20:08 crc kubenswrapper[4917]: E1212 00:20:08.546169 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc\": container with ID starting with f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc not found: ID does not exist" containerID="f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc" Dec 12 00:20:08 crc kubenswrapper[4917]: I1212 00:20:08.546209 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc"} err="failed to get container status \"f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc\": rpc error: code = NotFound desc = could not find container \"f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc\": container with ID starting with f1456d29de565e294f87ae20813c072fe61dddbe90eca2a3289c028baa6397bc not found: ID does not exist" Dec 12 00:20:09 crc kubenswrapper[4917]: I1212 00:20:09.610562 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ace861f-ace6-42d1-a717-37da333aed72" path="/var/lib/kubelet/pods/5ace861f-ace6-42d1-a717-37da333aed72/volumes" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.174432 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj"] Dec 12 00:20:11 crc kubenswrapper[4917]: E1212 00:20:11.174800 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ace861f-ace6-42d1-a717-37da333aed72" 
containerName="registry-server" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.174817 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ace861f-ace6-42d1-a717-37da333aed72" containerName="registry-server" Dec 12 00:20:11 crc kubenswrapper[4917]: E1212 00:20:11.174851 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ace861f-ace6-42d1-a717-37da333aed72" containerName="extract-content" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.174858 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ace861f-ace6-42d1-a717-37da333aed72" containerName="extract-content" Dec 12 00:20:11 crc kubenswrapper[4917]: E1212 00:20:11.174871 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ace861f-ace6-42d1-a717-37da333aed72" containerName="extract-utilities" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.174881 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ace861f-ace6-42d1-a717-37da333aed72" containerName="extract-utilities" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.175023 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ace861f-ace6-42d1-a717-37da333aed72" containerName="registry-server" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.176129 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.179579 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.187447 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj"] Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.344355 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.344794 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.344836 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8jd4\" (UniqueName: \"kubernetes.io/projected/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-kube-api-access-b8jd4\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: 
I1212 00:20:11.446273 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8jd4\" (UniqueName: \"kubernetes.io/projected/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-kube-api-access-b8jd4\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.446359 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.446423 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.446874 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.446990 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.465271 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8jd4\" (UniqueName: \"kubernetes.io/projected/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-kube-api-access-b8jd4\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.506684 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:11 crc kubenswrapper[4917]: I1212 00:20:11.724191 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj"] Dec 12 00:20:12 crc kubenswrapper[4917]: I1212 00:20:12.498038 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" event={"ID":"11f5a0c1-fe52-4822-a95b-64e89e66d3c4","Type":"ContainerStarted","Data":"11aa2a51bf0da161b96a96beb806b53b5f4d58c2d029201449ed7675b2eaba1b"} Dec 12 00:20:12 crc kubenswrapper[4917]: I1212 00:20:12.498414 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" event={"ID":"11f5a0c1-fe52-4822-a95b-64e89e66d3c4","Type":"ContainerStarted","Data":"63f2ee86163593b7524080bb306bb2a8ca37c8c45885a6a9e611c2ea6d0015ce"} Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.505632 4917 
generic.go:334] "Generic (PLEG): container finished" podID="11f5a0c1-fe52-4822-a95b-64e89e66d3c4" containerID="11aa2a51bf0da161b96a96beb806b53b5f4d58c2d029201449ed7675b2eaba1b" exitCode=0 Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.505725 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" event={"ID":"11f5a0c1-fe52-4822-a95b-64e89e66d3c4","Type":"ContainerDied","Data":"11aa2a51bf0da161b96a96beb806b53b5f4d58c2d029201449ed7675b2eaba1b"} Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.511758 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tshmq"] Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.512421 4917 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.513416 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.531870 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tshmq"] Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.679705 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-utilities\") pod \"redhat-operators-tshmq\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.679763 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-catalog-content\") pod \"redhat-operators-tshmq\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " 
pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.679919 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkkw\" (UniqueName: \"kubernetes.io/projected/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-kube-api-access-6wkkw\") pod \"redhat-operators-tshmq\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.781264 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-utilities\") pod \"redhat-operators-tshmq\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.781334 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-catalog-content\") pod \"redhat-operators-tshmq\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.781390 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wkkw\" (UniqueName: \"kubernetes.io/projected/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-kube-api-access-6wkkw\") pod \"redhat-operators-tshmq\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.782043 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-utilities\") pod \"redhat-operators-tshmq\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " 
pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.782532 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-catalog-content\") pod \"redhat-operators-tshmq\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.819932 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wkkw\" (UniqueName: \"kubernetes.io/projected/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-kube-api-access-6wkkw\") pod \"redhat-operators-tshmq\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:13 crc kubenswrapper[4917]: I1212 00:20:13.829137 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:14 crc kubenswrapper[4917]: I1212 00:20:14.045355 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tshmq"] Dec 12 00:20:14 crc kubenswrapper[4917]: I1212 00:20:14.511618 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tshmq" event={"ID":"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4","Type":"ContainerStarted","Data":"d29ee9866863dea7da1a13dda7ba42fd6981da36d84f1407a23895c4a366ad71"} Dec 12 00:20:15 crc kubenswrapper[4917]: I1212 00:20:15.517045 4917 generic.go:334] "Generic (PLEG): container finished" podID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerID="8f25aeaf995062e654bee5ffe100cb1d2370fc4ca672fde0f661b072dc8a62ec" exitCode=0 Dec 12 00:20:15 crc kubenswrapper[4917]: I1212 00:20:15.517187 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tshmq" 
event={"ID":"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4","Type":"ContainerDied","Data":"8f25aeaf995062e654bee5ffe100cb1d2370fc4ca672fde0f661b072dc8a62ec"} Dec 12 00:20:19 crc kubenswrapper[4917]: I1212 00:20:19.539535 4917 generic.go:334] "Generic (PLEG): container finished" podID="11f5a0c1-fe52-4822-a95b-64e89e66d3c4" containerID="2c80e4d2b9da3e01339c449f246ad8616d43d7c6d9d566dbf6e45a5cfe30acdc" exitCode=0 Dec 12 00:20:19 crc kubenswrapper[4917]: I1212 00:20:19.539625 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" event={"ID":"11f5a0c1-fe52-4822-a95b-64e89e66d3c4","Type":"ContainerDied","Data":"2c80e4d2b9da3e01339c449f246ad8616d43d7c6d9d566dbf6e45a5cfe30acdc"} Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.547684 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" event={"ID":"11f5a0c1-fe52-4822-a95b-64e89e66d3c4","Type":"ContainerStarted","Data":"f68e52a90f77b3668587876555e65ebec46375358ed0b2100141e34dc1d4dadc"} Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.549487 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tshmq" event={"ID":"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4","Type":"ContainerStarted","Data":"d8c48c5737369a9008f19ac700e83a6baa275002a614ab3cc901b6efde78fe24"} Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.567768 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" podStartSLOduration=3.850572138 podStartE2EDuration="9.567745844s" podCreationTimestamp="2025-12-12 00:20:11 +0000 UTC" firstStartedPulling="2025-12-12 00:20:13.512046459 +0000 UTC m=+848.289847272" lastFinishedPulling="2025-12-12 00:20:19.229220165 +0000 UTC m=+854.007020978" observedRunningTime="2025-12-12 
00:20:20.566185252 +0000 UTC m=+855.343986075" watchObservedRunningTime="2025-12-12 00:20:20.567745844 +0000 UTC m=+855.345546677" Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.752187 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn"] Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.753956 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.762106 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn"] Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.785351 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frx4s\" (UniqueName: \"kubernetes.io/projected/b3d557ce-f222-460b-96a9-9b7e330f9b82-kube-api-access-frx4s\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.785488 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.785538 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.886271 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.886358 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frx4s\" (UniqueName: \"kubernetes.io/projected/b3d557ce-f222-460b-96a9-9b7e330f9b82-kube-api-access-frx4s\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.886391 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.886937 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn\" (UID: 
\"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.887166 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:20 crc kubenswrapper[4917]: I1212 00:20:20.916723 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frx4s\" (UniqueName: \"kubernetes.io/projected/b3d557ce-f222-460b-96a9-9b7e330f9b82-kube-api-access-frx4s\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.079412 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.258305 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn"] Dec 12 00:20:21 crc kubenswrapper[4917]: W1212 00:20:21.265304 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3d557ce_f222_460b_96a9_9b7e330f9b82.slice/crio-72d4bdc8dc5e444eba427d789ec7363c378340c5e8a2661e47a5daecf95f14e1 WatchSource:0}: Error finding container 72d4bdc8dc5e444eba427d789ec7363c378340c5e8a2661e47a5daecf95f14e1: Status 404 returned error can't find the container with id 72d4bdc8dc5e444eba427d789ec7363c378340c5e8a2661e47a5daecf95f14e1 Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.559302 4917 generic.go:334] "Generic (PLEG): container finished" podID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerID="d8c48c5737369a9008f19ac700e83a6baa275002a614ab3cc901b6efde78fe24" exitCode=0 Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.559372 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tshmq" event={"ID":"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4","Type":"ContainerDied","Data":"d8c48c5737369a9008f19ac700e83a6baa275002a614ab3cc901b6efde78fe24"} Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.561619 4917 generic.go:334] "Generic (PLEG): container finished" podID="11f5a0c1-fe52-4822-a95b-64e89e66d3c4" containerID="f68e52a90f77b3668587876555e65ebec46375358ed0b2100141e34dc1d4dadc" exitCode=0 Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.561677 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" 
event={"ID":"11f5a0c1-fe52-4822-a95b-64e89e66d3c4","Type":"ContainerDied","Data":"f68e52a90f77b3668587876555e65ebec46375358ed0b2100141e34dc1d4dadc"} Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.562049 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82"] Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.563910 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" event={"ID":"b3d557ce-f222-460b-96a9-9b7e330f9b82","Type":"ContainerStarted","Data":"72d4bdc8dc5e444eba427d789ec7363c378340c5e8a2661e47a5daecf95f14e1"} Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.564134 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.578438 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82"] Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.698857 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jbrf\" (UniqueName: \"kubernetes.io/projected/eb199435-b885-4a1b-bff7-cd9f113dfe70-kube-api-access-7jbrf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.699200 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82\" 
(UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.699252 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.800042 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.800110 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.800157 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jbrf\" (UniqueName: \"kubernetes.io/projected/eb199435-b885-4a1b-bff7-cd9f113dfe70-kube-api-access-7jbrf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.800511 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.800551 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.823756 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jbrf\" (UniqueName: \"kubernetes.io/projected/eb199435-b885-4a1b-bff7-cd9f113dfe70-kube-api-access-7jbrf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:21 crc kubenswrapper[4917]: I1212 00:20:21.894574 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:22 crc kubenswrapper[4917]: I1212 00:20:22.084956 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82"] Dec 12 00:20:22 crc kubenswrapper[4917]: W1212 00:20:22.090781 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb199435_b885_4a1b_bff7_cd9f113dfe70.slice/crio-f34a42b29d4320863b4094655e54b535cc98f68a81a629e6661122c6d276ee69 WatchSource:0}: Error finding container f34a42b29d4320863b4094655e54b535cc98f68a81a629e6661122c6d276ee69: Status 404 returned error can't find the container with id f34a42b29d4320863b4094655e54b535cc98f68a81a629e6661122c6d276ee69 Dec 12 00:20:22 crc kubenswrapper[4917]: I1212 00:20:22.644587 4917 generic.go:334] "Generic (PLEG): container finished" podID="b3d557ce-f222-460b-96a9-9b7e330f9b82" containerID="1596d5ccfdf8959f2a15ff6f2172cb48c2ae184cb515978f00de9a8822b699aa" exitCode=0 Dec 12 00:20:22 crc kubenswrapper[4917]: I1212 00:20:22.644679 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" event={"ID":"b3d557ce-f222-460b-96a9-9b7e330f9b82","Type":"ContainerDied","Data":"1596d5ccfdf8959f2a15ff6f2172cb48c2ae184cb515978f00de9a8822b699aa"} Dec 12 00:20:22 crc kubenswrapper[4917]: I1212 00:20:22.646145 4917 generic.go:334] "Generic (PLEG): container finished" podID="eb199435-b885-4a1b-bff7-cd9f113dfe70" containerID="3e4b83c510fcb4161c34809fede5a30190916b6da77f3a233327afed15716746" exitCode=0 Dec 12 00:20:22 crc kubenswrapper[4917]: I1212 00:20:22.646226 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" 
event={"ID":"eb199435-b885-4a1b-bff7-cd9f113dfe70","Type":"ContainerDied","Data":"3e4b83c510fcb4161c34809fede5a30190916b6da77f3a233327afed15716746"} Dec 12 00:20:22 crc kubenswrapper[4917]: I1212 00:20:22.646312 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" event={"ID":"eb199435-b885-4a1b-bff7-cd9f113dfe70","Type":"ContainerStarted","Data":"f34a42b29d4320863b4094655e54b535cc98f68a81a629e6661122c6d276ee69"} Dec 12 00:20:22 crc kubenswrapper[4917]: I1212 00:20:22.913230 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.018061 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-util\") pod \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.018172 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-bundle\") pod \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.018265 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8jd4\" (UniqueName: \"kubernetes.io/projected/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-kube-api-access-b8jd4\") pod \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\" (UID: \"11f5a0c1-fe52-4822-a95b-64e89e66d3c4\") " Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.020755 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-bundle" 
(OuterVolumeSpecName: "bundle") pod "11f5a0c1-fe52-4822-a95b-64e89e66d3c4" (UID: "11f5a0c1-fe52-4822-a95b-64e89e66d3c4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.024082 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-kube-api-access-b8jd4" (OuterVolumeSpecName: "kube-api-access-b8jd4") pod "11f5a0c1-fe52-4822-a95b-64e89e66d3c4" (UID: "11f5a0c1-fe52-4822-a95b-64e89e66d3c4"). InnerVolumeSpecName "kube-api-access-b8jd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.031408 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-util" (OuterVolumeSpecName: "util") pod "11f5a0c1-fe52-4822-a95b-64e89e66d3c4" (UID: "11f5a0c1-fe52-4822-a95b-64e89e66d3c4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.119466 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8jd4\" (UniqueName: \"kubernetes.io/projected/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-kube-api-access-b8jd4\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.119892 4917 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-util\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.119904 4917 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11f5a0c1-fe52-4822-a95b-64e89e66d3c4-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.658561 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tshmq" event={"ID":"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4","Type":"ContainerStarted","Data":"602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d"} Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.667531 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" event={"ID":"11f5a0c1-fe52-4822-a95b-64e89e66d3c4","Type":"ContainerDied","Data":"63f2ee86163593b7524080bb306bb2a8ca37c8c45885a6a9e611c2ea6d0015ce"} Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.667576 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f2ee86163593b7524080bb306bb2a8ca37c8c45885a6a9e611c2ea6d0015ce" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.667579 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.679055 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tshmq" podStartSLOduration=3.334428357 podStartE2EDuration="10.679037614s" podCreationTimestamp="2025-12-12 00:20:13 +0000 UTC" firstStartedPulling="2025-12-12 00:20:15.518725007 +0000 UTC m=+850.296525820" lastFinishedPulling="2025-12-12 00:20:22.863334264 +0000 UTC m=+857.641135077" observedRunningTime="2025-12-12 00:20:23.676202228 +0000 UTC m=+858.454003051" watchObservedRunningTime="2025-12-12 00:20:23.679037614 +0000 UTC m=+858.456838437" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.829906 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:23 crc kubenswrapper[4917]: I1212 00:20:23.830718 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:24 crc kubenswrapper[4917]: I1212 00:20:24.674882 4917 generic.go:334] "Generic (PLEG): container finished" podID="b3d557ce-f222-460b-96a9-9b7e330f9b82" containerID="ff6a55730df02a1a1614a6fe9957c72ec6ad04cba207a8567a0cb1e1aa868be6" exitCode=0 Dec 12 00:20:24 crc kubenswrapper[4917]: I1212 00:20:24.674947 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" event={"ID":"b3d557ce-f222-460b-96a9-9b7e330f9b82","Type":"ContainerDied","Data":"ff6a55730df02a1a1614a6fe9957c72ec6ad04cba207a8567a0cb1e1aa868be6"} Dec 12 00:20:24 crc kubenswrapper[4917]: I1212 00:20:24.678801 4917 generic.go:334] "Generic (PLEG): container finished" podID="eb199435-b885-4a1b-bff7-cd9f113dfe70" containerID="dce604d83f35e0d40705442ed4cdcd0e37e768aa737ec5b3f35bf18fbe928236" exitCode=0 Dec 12 
00:20:24 crc kubenswrapper[4917]: I1212 00:20:24.678929 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" event={"ID":"eb199435-b885-4a1b-bff7-cd9f113dfe70","Type":"ContainerDied","Data":"dce604d83f35e0d40705442ed4cdcd0e37e768aa737ec5b3f35bf18fbe928236"} Dec 12 00:20:24 crc kubenswrapper[4917]: I1212 00:20:24.894110 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tshmq" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="registry-server" probeResult="failure" output=< Dec 12 00:20:24 crc kubenswrapper[4917]: timeout: failed to connect service ":50051" within 1s Dec 12 00:20:24 crc kubenswrapper[4917]: > Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.696618 4917 generic.go:334] "Generic (PLEG): container finished" podID="eb199435-b885-4a1b-bff7-cd9f113dfe70" containerID="2b25855156c6c944bfa4002e8ee30635928b23847d0cb681e44a0cbf965b14b0" exitCode=0 Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.696692 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" event={"ID":"eb199435-b885-4a1b-bff7-cd9f113dfe70","Type":"ContainerDied","Data":"2b25855156c6c944bfa4002e8ee30635928b23847d0cb681e44a0cbf965b14b0"} Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.702891 4917 generic.go:334] "Generic (PLEG): container finished" podID="b3d557ce-f222-460b-96a9-9b7e330f9b82" containerID="9cae01d7149f685a8fc2d6d375b545e804ef8b6db02646e8e2a2e20b19ff62bf" exitCode=0 Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.703502 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" event={"ID":"b3d557ce-f222-460b-96a9-9b7e330f9b82","Type":"ContainerDied","Data":"9cae01d7149f685a8fc2d6d375b545e804ef8b6db02646e8e2a2e20b19ff62bf"} 
Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.942517 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w"] Dec 12 00:20:25 crc kubenswrapper[4917]: E1212 00:20:25.943245 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f5a0c1-fe52-4822-a95b-64e89e66d3c4" containerName="extract" Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.943266 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f5a0c1-fe52-4822-a95b-64e89e66d3c4" containerName="extract" Dec 12 00:20:25 crc kubenswrapper[4917]: E1212 00:20:25.943296 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f5a0c1-fe52-4822-a95b-64e89e66d3c4" containerName="util" Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.943306 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f5a0c1-fe52-4822-a95b-64e89e66d3c4" containerName="util" Dec 12 00:20:25 crc kubenswrapper[4917]: E1212 00:20:25.943325 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f5a0c1-fe52-4822-a95b-64e89e66d3c4" containerName="pull" Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.943335 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f5a0c1-fe52-4822-a95b-64e89e66d3c4" containerName="pull" Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.943480 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f5a0c1-fe52-4822-a95b-64e89e66d3c4" containerName="extract" Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.944470 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:25 crc kubenswrapper[4917]: I1212 00:20:25.954316 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w"] Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.063084 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6gk\" (UniqueName: \"kubernetes.io/projected/277c47e2-03cd-4ac4-9125-3379f89dc58c-kube-api-access-5w6gk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.063130 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.063269 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.164188 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.164242 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6gk\" (UniqueName: \"kubernetes.io/projected/277c47e2-03cd-4ac4-9125-3379f89dc58c-kube-api-access-5w6gk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.164264 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.164884 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.164895 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w\" (UID: 
\"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.189702 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6gk\" (UniqueName: \"kubernetes.io/projected/277c47e2-03cd-4ac4-9125-3379f89dc58c-kube-api-access-5w6gk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.275518 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.510947 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b4zhp"] Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.512100 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.522346 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4zhp"] Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.569369 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-utilities\") pod \"certified-operators-b4zhp\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.569765 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb9wv\" (UniqueName: \"kubernetes.io/projected/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-kube-api-access-fb9wv\") pod \"certified-operators-b4zhp\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.569805 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-catalog-content\") pod \"certified-operators-b4zhp\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.671403 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-utilities\") pod \"certified-operators-b4zhp\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.671526 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fb9wv\" (UniqueName: \"kubernetes.io/projected/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-kube-api-access-fb9wv\") pod \"certified-operators-b4zhp\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.671583 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-catalog-content\") pod \"certified-operators-b4zhp\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.671863 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-utilities\") pod \"certified-operators-b4zhp\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.672517 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-catalog-content\") pod \"certified-operators-b4zhp\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.687335 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w"] Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.707581 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb9wv\" (UniqueName: \"kubernetes.io/projected/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-kube-api-access-fb9wv\") pod \"certified-operators-b4zhp\" (UID: 
\"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.714834 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" event={"ID":"277c47e2-03cd-4ac4-9125-3379f89dc58c","Type":"ContainerStarted","Data":"6f2ada049e956f945299c05a7b628890309fe8f4228cfdf6cd32804b950955af"} Dec 12 00:20:26 crc kubenswrapper[4917]: I1212 00:20:26.838679 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.561809 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.703823 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frx4s\" (UniqueName: \"kubernetes.io/projected/b3d557ce-f222-460b-96a9-9b7e330f9b82-kube-api-access-frx4s\") pod \"b3d557ce-f222-460b-96a9-9b7e330f9b82\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.704021 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-util\") pod \"b3d557ce-f222-460b-96a9-9b7e330f9b82\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.704071 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-bundle\") pod \"b3d557ce-f222-460b-96a9-9b7e330f9b82\" (UID: \"b3d557ce-f222-460b-96a9-9b7e330f9b82\") " Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.705016 4917 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-bundle" (OuterVolumeSpecName: "bundle") pod "b3d557ce-f222-460b-96a9-9b7e330f9b82" (UID: "b3d557ce-f222-460b-96a9-9b7e330f9b82"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.708495 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d557ce-f222-460b-96a9-9b7e330f9b82-kube-api-access-frx4s" (OuterVolumeSpecName: "kube-api-access-frx4s") pod "b3d557ce-f222-460b-96a9-9b7e330f9b82" (UID: "b3d557ce-f222-460b-96a9-9b7e330f9b82"). InnerVolumeSpecName "kube-api-access-frx4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.709186 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.730834 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" event={"ID":"eb199435-b885-4a1b-bff7-cd9f113dfe70","Type":"ContainerDied","Data":"f34a42b29d4320863b4094655e54b535cc98f68a81a629e6661122c6d276ee69"} Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.730869 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.730876 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f34a42b29d4320863b4094655e54b535cc98f68a81a629e6661122c6d276ee69" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.733298 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" event={"ID":"b3d557ce-f222-460b-96a9-9b7e330f9b82","Type":"ContainerDied","Data":"72d4bdc8dc5e444eba427d789ec7363c378340c5e8a2661e47a5daecf95f14e1"} Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.733335 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72d4bdc8dc5e444eba427d789ec7363c378340c5e8a2661e47a5daecf95f14e1" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.733409 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.806495 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-bundle\") pod \"eb199435-b885-4a1b-bff7-cd9f113dfe70\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.806561 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jbrf\" (UniqueName: \"kubernetes.io/projected/eb199435-b885-4a1b-bff7-cd9f113dfe70-kube-api-access-7jbrf\") pod \"eb199435-b885-4a1b-bff7-cd9f113dfe70\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.806583 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-util\") pod \"eb199435-b885-4a1b-bff7-cd9f113dfe70\" (UID: \"eb199435-b885-4a1b-bff7-cd9f113dfe70\") " Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.806827 4917 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.806843 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frx4s\" (UniqueName: \"kubernetes.io/projected/b3d557ce-f222-460b-96a9-9b7e330f9b82-kube-api-access-frx4s\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.807600 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-bundle" (OuterVolumeSpecName: "bundle") pod "eb199435-b885-4a1b-bff7-cd9f113dfe70" (UID: 
"eb199435-b885-4a1b-bff7-cd9f113dfe70"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.810230 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb199435-b885-4a1b-bff7-cd9f113dfe70-kube-api-access-7jbrf" (OuterVolumeSpecName: "kube-api-access-7jbrf") pod "eb199435-b885-4a1b-bff7-cd9f113dfe70" (UID: "eb199435-b885-4a1b-bff7-cd9f113dfe70"). InnerVolumeSpecName "kube-api-access-7jbrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.894819 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4zhp"] Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.908528 4917 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:27 crc kubenswrapper[4917]: I1212 00:20:27.908565 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jbrf\" (UniqueName: \"kubernetes.io/projected/eb199435-b885-4a1b-bff7-cd9f113dfe70-kube-api-access-7jbrf\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:28 crc kubenswrapper[4917]: I1212 00:20:28.739369 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zhp" event={"ID":"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95","Type":"ContainerStarted","Data":"effa8be4dfcc4666576b248a27b6788f56d51ec29b357810bd3be745acfbdd9e"} Dec 12 00:20:29 crc kubenswrapper[4917]: I1212 00:20:29.966325 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-util" (OuterVolumeSpecName: "util") pod "b3d557ce-f222-460b-96a9-9b7e330f9b82" (UID: "b3d557ce-f222-460b-96a9-9b7e330f9b82"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:20:30 crc kubenswrapper[4917]: I1212 00:20:30.009046 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-util" (OuterVolumeSpecName: "util") pod "eb199435-b885-4a1b-bff7-cd9f113dfe70" (UID: "eb199435-b885-4a1b-bff7-cd9f113dfe70"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:20:30 crc kubenswrapper[4917]: I1212 00:20:30.021358 4917 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb199435-b885-4a1b-bff7-cd9f113dfe70-util\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:30 crc kubenswrapper[4917]: I1212 00:20:30.021401 4917 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3d557ce-f222-460b-96a9-9b7e330f9b82-util\") on node \"crc\" DevicePath \"\"" Dec 12 00:20:31 crc kubenswrapper[4917]: I1212 00:20:31.903332 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" event={"ID":"277c47e2-03cd-4ac4-9125-3379f89dc58c","Type":"ContainerStarted","Data":"a6f76564fb9dd8054ac97c0d9c9a7f3ecf703db74afd134b90f50e9e8e463959"} Dec 12 00:20:32 crc kubenswrapper[4917]: I1212 00:20:32.954207 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zhp" event={"ID":"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95","Type":"ContainerStarted","Data":"0c4a36305bff8369aa38f56bbd86cca602037f877cbbf3848d443c58606c81c1"} Dec 12 00:20:34 crc kubenswrapper[4917]: I1212 00:20:34.005200 4917 generic.go:334] "Generic (PLEG): container finished" podID="277c47e2-03cd-4ac4-9125-3379f89dc58c" containerID="a6f76564fb9dd8054ac97c0d9c9a7f3ecf703db74afd134b90f50e9e8e463959" exitCode=0 Dec 12 00:20:34 crc kubenswrapper[4917]: I1212 00:20:34.005312 4917 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" event={"ID":"277c47e2-03cd-4ac4-9125-3379f89dc58c","Type":"ContainerDied","Data":"a6f76564fb9dd8054ac97c0d9c9a7f3ecf703db74afd134b90f50e9e8e463959"} Dec 12 00:20:34 crc kubenswrapper[4917]: I1212 00:20:34.006629 4917 generic.go:334] "Generic (PLEG): container finished" podID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerID="0c4a36305bff8369aa38f56bbd86cca602037f877cbbf3848d443c58606c81c1" exitCode=0 Dec 12 00:20:34 crc kubenswrapper[4917]: I1212 00:20:34.006689 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zhp" event={"ID":"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95","Type":"ContainerDied","Data":"0c4a36305bff8369aa38f56bbd86cca602037f877cbbf3848d443c58606c81c1"} Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.000177 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-kwphw"] Dec 12 00:20:35 crc kubenswrapper[4917]: E1212 00:20:35.000748 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb199435-b885-4a1b-bff7-cd9f113dfe70" containerName="pull" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.000838 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb199435-b885-4a1b-bff7-cd9f113dfe70" containerName="pull" Dec 12 00:20:35 crc kubenswrapper[4917]: E1212 00:20:35.000912 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d557ce-f222-460b-96a9-9b7e330f9b82" containerName="extract" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.000993 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d557ce-f222-460b-96a9-9b7e330f9b82" containerName="extract" Dec 12 00:20:35 crc kubenswrapper[4917]: E1212 00:20:35.001080 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d557ce-f222-460b-96a9-9b7e330f9b82" containerName="util" Dec 12 00:20:35 crc 
kubenswrapper[4917]: I1212 00:20:35.001158 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d557ce-f222-460b-96a9-9b7e330f9b82" containerName="util" Dec 12 00:20:35 crc kubenswrapper[4917]: E1212 00:20:35.001236 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb199435-b885-4a1b-bff7-cd9f113dfe70" containerName="util" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.001301 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb199435-b885-4a1b-bff7-cd9f113dfe70" containerName="util" Dec 12 00:20:35 crc kubenswrapper[4917]: E1212 00:20:35.001372 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb199435-b885-4a1b-bff7-cd9f113dfe70" containerName="extract" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.001447 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb199435-b885-4a1b-bff7-cd9f113dfe70" containerName="extract" Dec 12 00:20:35 crc kubenswrapper[4917]: E1212 00:20:35.001530 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d557ce-f222-460b-96a9-9b7e330f9b82" containerName="pull" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.001597 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d557ce-f222-460b-96a9-9b7e330f9b82" containerName="pull" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.001820 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb199435-b885-4a1b-bff7-cd9f113dfe70" containerName="extract" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.001911 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d557ce-f222-460b-96a9-9b7e330f9b82" containerName="extract" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.002444 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-kwphw" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.006917 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.008971 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-pj8c7" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.008997 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.015786 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-kwphw"] Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.123491 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tshmq" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="registry-server" probeResult="failure" output=< Dec 12 00:20:35 crc kubenswrapper[4917]: timeout: failed to connect service ":50051" within 1s Dec 12 00:20:35 crc kubenswrapper[4917]: > Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.159463 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c2pb\" (UniqueName: \"kubernetes.io/projected/fedd1280-db6c-42a9-b725-7887bb70e09e-kube-api-access-8c2pb\") pod \"interconnect-operator-5bb49f789d-kwphw\" (UID: \"fedd1280-db6c-42a9-b725-7887bb70e09e\") " pod="service-telemetry/interconnect-operator-5bb49f789d-kwphw" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.260903 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c2pb\" (UniqueName: \"kubernetes.io/projected/fedd1280-db6c-42a9-b725-7887bb70e09e-kube-api-access-8c2pb\") pod \"interconnect-operator-5bb49f789d-kwphw\" 
(UID: \"fedd1280-db6c-42a9-b725-7887bb70e09e\") " pod="service-telemetry/interconnect-operator-5bb49f789d-kwphw" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.304321 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c2pb\" (UniqueName: \"kubernetes.io/projected/fedd1280-db6c-42a9-b725-7887bb70e09e-kube-api-access-8c2pb\") pod \"interconnect-operator-5bb49f789d-kwphw\" (UID: \"fedd1280-db6c-42a9-b725-7887bb70e09e\") " pod="service-telemetry/interconnect-operator-5bb49f789d-kwphw" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.361966 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-kwphw" Dec 12 00:20:35 crc kubenswrapper[4917]: I1212 00:20:35.705131 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-kwphw"] Dec 12 00:20:36 crc kubenswrapper[4917]: I1212 00:20:36.016590 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-kwphw" event={"ID":"fedd1280-db6c-42a9-b725-7887bb70e09e","Type":"ContainerStarted","Data":"578e81f7a0c7da9a3c1b964dc0bffd9d8180f8ce1ac5641470052249e77553cd"} Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.708068 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4"] Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.709056 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4" Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.711287 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.711620 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.711802 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mxg5f" Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.721669 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4"] Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.853106 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg6jz\" (UniqueName: \"kubernetes.io/projected/ef2dff2d-d381-4578-8da4-a7e49e767228-kube-api-access-gg6jz\") pod \"obo-prometheus-operator-668cf9dfbb-jz8h4\" (UID: \"ef2dff2d-d381-4578-8da4-a7e49e767228\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4" Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.871718 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h"] Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.874246 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.877141 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.883097 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-j98dk" Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.891760 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz"] Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.892627 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.896605 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz"] Dec 12 00:20:37 crc kubenswrapper[4917]: I1212 00:20:37.903769 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h"] Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:37.956415 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg6jz\" (UniqueName: \"kubernetes.io/projected/ef2dff2d-d381-4578-8da4-a7e49e767228-kube-api-access-gg6jz\") pod \"obo-prometheus-operator-668cf9dfbb-jz8h4\" (UID: \"ef2dff2d-d381-4578-8da4-a7e49e767228\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.045547 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg6jz\" (UniqueName: 
\"kubernetes.io/projected/ef2dff2d-d381-4578-8da4-a7e49e767228-kube-api-access-gg6jz\") pod \"obo-prometheus-operator-668cf9dfbb-jz8h4\" (UID: \"ef2dff2d-d381-4578-8da4-a7e49e767228\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.058315 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zhp" event={"ID":"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95","Type":"ContainerStarted","Data":"adb465daa37f85f57b412f7a6f2b09d331eda177413b1ba38bd0e14c54305bd5"} Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.064052 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3192b300-476f-4127-a662-9636f89655c7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h\" (UID: \"3192b300-476f-4127-a662-9636f89655c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.064150 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3192b300-476f-4127-a662-9636f89655c7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h\" (UID: \"3192b300-476f-4127-a662-9636f89655c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.064235 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e46b2b7-70db-4c02-bde2-45fd66f3f151-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-g22vz\" (UID: \"4e46b2b7-70db-4c02-bde2-45fd66f3f151\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" 
Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.064270 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e46b2b7-70db-4c02-bde2-45fd66f3f151-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-g22vz\" (UID: \"4e46b2b7-70db-4c02-bde2-45fd66f3f151\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.150145 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.166148 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e46b2b7-70db-4c02-bde2-45fd66f3f151-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-g22vz\" (UID: \"4e46b2b7-70db-4c02-bde2-45fd66f3f151\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.166202 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e46b2b7-70db-4c02-bde2-45fd66f3f151-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-g22vz\" (UID: \"4e46b2b7-70db-4c02-bde2-45fd66f3f151\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.166235 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3192b300-476f-4127-a662-9636f89655c7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h\" (UID: \"3192b300-476f-4127-a662-9636f89655c7\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.166322 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3192b300-476f-4127-a662-9636f89655c7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h\" (UID: \"3192b300-476f-4127-a662-9636f89655c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.179447 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3192b300-476f-4127-a662-9636f89655c7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h\" (UID: \"3192b300-476f-4127-a662-9636f89655c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.180126 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3192b300-476f-4127-a662-9636f89655c7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h\" (UID: \"3192b300-476f-4127-a662-9636f89655c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.184601 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-zkjtd"] Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.185506 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.192950 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.200675 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-zkjtd"] Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.215435 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e46b2b7-70db-4c02-bde2-45fd66f3f151-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-g22vz\" (UID: \"4e46b2b7-70db-4c02-bde2-45fd66f3f151\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.267495 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9084360e-9bf8-4c2e-a531-ae62da73050d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-zkjtd\" (UID: \"9084360e-9bf8-4c2e-a531-ae62da73050d\") " pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.267593 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd8ff\" (UniqueName: \"kubernetes.io/projected/9084360e-9bf8-4c2e-a531-ae62da73050d-kube-api-access-zd8ff\") pod \"observability-operator-d8bb48f5d-zkjtd\" (UID: \"9084360e-9bf8-4c2e-a531-ae62da73050d\") " pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.299762 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6ppl9" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.301635 4917 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operators"/"observability-operator-tls" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.305639 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e46b2b7-70db-4c02-bde2-45fd66f3f151-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-76c76f4448-g22vz\" (UID: \"4e46b2b7-70db-4c02-bde2-45fd66f3f151\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.368870 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd8ff\" (UniqueName: \"kubernetes.io/projected/9084360e-9bf8-4c2e-a531-ae62da73050d-kube-api-access-zd8ff\") pod \"observability-operator-d8bb48f5d-zkjtd\" (UID: \"9084360e-9bf8-4c2e-a531-ae62da73050d\") " pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.369082 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9084360e-9bf8-4c2e-a531-ae62da73050d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-zkjtd\" (UID: \"9084360e-9bf8-4c2e-a531-ae62da73050d\") " pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.421401 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9084360e-9bf8-4c2e-a531-ae62da73050d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-zkjtd\" (UID: \"9084360e-9bf8-4c2e-a531-ae62da73050d\") " pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.447109 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd8ff\" (UniqueName: 
\"kubernetes.io/projected/9084360e-9bf8-4c2e-a531-ae62da73050d-kube-api-access-zd8ff\") pod \"observability-operator-d8bb48f5d-zkjtd\" (UID: \"9084360e-9bf8-4c2e-a531-ae62da73050d\") " pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.675669 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.676385 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.679848 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-q9zcl"] Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.680532 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-q9zcl" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.683103 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-dgs8t" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.723747 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-q9zcl"] Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.908379 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-8fb9d549d-7c4mf"] Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.909789 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.913795 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-q8pvr" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.925750 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-8fb9d549d-7c4mf"] Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.929141 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0560a2ae-f9f1-489d-97f2-5f51bbb6dff1-apiservice-cert\") pod \"elastic-operator-8fb9d549d-7c4mf\" (UID: \"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1\") " pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.929291 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0560a2ae-f9f1-489d-97f2-5f51bbb6dff1-webhook-cert\") pod \"elastic-operator-8fb9d549d-7c4mf\" (UID: \"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1\") " pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.929355 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d298p\" (UniqueName: \"kubernetes.io/projected/0560a2ae-f9f1-489d-97f2-5f51bbb6dff1-kube-api-access-d298p\") pod \"elastic-operator-8fb9d549d-7c4mf\" (UID: \"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1\") " pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.929421 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbfw6\" (UniqueName: 
\"kubernetes.io/projected/e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d-kube-api-access-bbfw6\") pod \"perses-operator-5446b9c989-q9zcl\" (UID: \"e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d\") " pod="openshift-operators/perses-operator-5446b9c989-q9zcl" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.929614 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d-openshift-service-ca\") pod \"perses-operator-5446b9c989-q9zcl\" (UID: \"e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d\") " pod="openshift-operators/perses-operator-5446b9c989-q9zcl" Dec 12 00:20:38 crc kubenswrapper[4917]: I1212 00:20:38.959621 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.043364 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0560a2ae-f9f1-489d-97f2-5f51bbb6dff1-apiservice-cert\") pod \"elastic-operator-8fb9d549d-7c4mf\" (UID: \"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1\") " pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.043474 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0560a2ae-f9f1-489d-97f2-5f51bbb6dff1-webhook-cert\") pod \"elastic-operator-8fb9d549d-7c4mf\" (UID: \"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1\") " pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.043508 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d298p\" (UniqueName: \"kubernetes.io/projected/0560a2ae-f9f1-489d-97f2-5f51bbb6dff1-kube-api-access-d298p\") pod \"elastic-operator-8fb9d549d-7c4mf\" (UID: 
\"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1\") " pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.043554 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbfw6\" (UniqueName: \"kubernetes.io/projected/e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d-kube-api-access-bbfw6\") pod \"perses-operator-5446b9c989-q9zcl\" (UID: \"e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d\") " pod="openshift-operators/perses-operator-5446b9c989-q9zcl" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.043641 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d-openshift-service-ca\") pod \"perses-operator-5446b9c989-q9zcl\" (UID: \"e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d\") " pod="openshift-operators/perses-operator-5446b9c989-q9zcl" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.055089 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0560a2ae-f9f1-489d-97f2-5f51bbb6dff1-apiservice-cert\") pod \"elastic-operator-8fb9d549d-7c4mf\" (UID: \"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1\") " pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.069358 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d-openshift-service-ca\") pod \"perses-operator-5446b9c989-q9zcl\" (UID: \"e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d\") " pod="openshift-operators/perses-operator-5446b9c989-q9zcl" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.083156 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0560a2ae-f9f1-489d-97f2-5f51bbb6dff1-webhook-cert\") pod 
\"elastic-operator-8fb9d549d-7c4mf\" (UID: \"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1\") " pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.231291 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbfw6\" (UniqueName: \"kubernetes.io/projected/e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d-kube-api-access-bbfw6\") pod \"perses-operator-5446b9c989-q9zcl\" (UID: \"e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d\") " pod="openshift-operators/perses-operator-5446b9c989-q9zcl" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.240200 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d298p\" (UniqueName: \"kubernetes.io/projected/0560a2ae-f9f1-489d-97f2-5f51bbb6dff1-kube-api-access-d298p\") pod \"elastic-operator-8fb9d549d-7c4mf\" (UID: \"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1\") " pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.308547 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-q9zcl" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.317693 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.327538 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h"] Dec 12 00:20:39 crc kubenswrapper[4917]: W1212 00:20:39.380164 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3192b300_476f_4127_a662_9636f89655c7.slice/crio-12d9c93fa2ebb6876d94d5be32675838a9d7379604aeeebfc3e349ee0ecea410 WatchSource:0}: Error finding container 12d9c93fa2ebb6876d94d5be32675838a9d7379604aeeebfc3e349ee0ecea410: Status 404 returned error can't find the container with id 12d9c93fa2ebb6876d94d5be32675838a9d7379604aeeebfc3e349ee0ecea410 Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.617358 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4"] Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.735155 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-zkjtd"] Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.754627 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz"] Dec 12 00:20:39 crc kubenswrapper[4917]: I1212 00:20:39.844156 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-q9zcl"] Dec 12 00:20:40 crc kubenswrapper[4917]: I1212 00:20:40.067085 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-8fb9d549d-7c4mf"] Dec 12 00:20:40 crc kubenswrapper[4917]: I1212 00:20:40.243005 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" 
event={"ID":"3192b300-476f-4127-a662-9636f89655c7","Type":"ContainerStarted","Data":"12d9c93fa2ebb6876d94d5be32675838a9d7379604aeeebfc3e349ee0ecea410"} Dec 12 00:20:40 crc kubenswrapper[4917]: I1212 00:20:40.256569 4917 generic.go:334] "Generic (PLEG): container finished" podID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerID="adb465daa37f85f57b412f7a6f2b09d331eda177413b1ba38bd0e14c54305bd5" exitCode=0 Dec 12 00:20:40 crc kubenswrapper[4917]: I1212 00:20:40.256689 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zhp" event={"ID":"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95","Type":"ContainerDied","Data":"adb465daa37f85f57b412f7a6f2b09d331eda177413b1ba38bd0e14c54305bd5"} Dec 12 00:20:44 crc kubenswrapper[4917]: I1212 00:20:44.001722 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:44 crc kubenswrapper[4917]: I1212 00:20:44.066105 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:20:44 crc kubenswrapper[4917]: I1212 00:20:44.288027 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" event={"ID":"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1","Type":"ContainerStarted","Data":"af4f5979a2faef76e20434148b93f9ec2d6cf82ae6ee2ec26f0a03c30fc252d2"} Dec 12 00:20:44 crc kubenswrapper[4917]: I1212 00:20:44.290365 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4" event={"ID":"ef2dff2d-d381-4578-8da4-a7e49e767228","Type":"ContainerStarted","Data":"b86c528a3f7728ad2c93c0d48078239b24732edd73f6f4821218ff4b3fc35616"} Dec 12 00:20:44 crc kubenswrapper[4917]: I1212 00:20:44.296284 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" 
event={"ID":"9084360e-9bf8-4c2e-a531-ae62da73050d","Type":"ContainerStarted","Data":"7b8b373557d91e9bec80024007de35cf955866ff6c2264028b215d9bc6dfcf8b"} Dec 12 00:20:44 crc kubenswrapper[4917]: I1212 00:20:44.309050 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-q9zcl" event={"ID":"e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d","Type":"ContainerStarted","Data":"3ece307f3917c3cec8cfd2210d976dc4d42a8cd46598f295cb546c7ebb3fa781"} Dec 12 00:20:44 crc kubenswrapper[4917]: I1212 00:20:44.313380 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" event={"ID":"4e46b2b7-70db-4c02-bde2-45fd66f3f151","Type":"ContainerStarted","Data":"bdcefa270033ce85bea132dafed93f0e054eb2b31bd2de33e285c7b8e7d2b9ff"} Dec 12 00:20:46 crc kubenswrapper[4917]: I1212 00:20:46.904692 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tshmq"] Dec 12 00:20:46 crc kubenswrapper[4917]: I1212 00:20:46.905243 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tshmq" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="registry-server" containerID="cri-o://602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d" gracePeriod=2 Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:52.931912 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7gjwt"] Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:52.933959 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:52.945924 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gjwt"] Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.003169 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt4wn\" (UniqueName: \"kubernetes.io/projected/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-kube-api-access-pt4wn\") pod \"community-operators-7gjwt\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.003250 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-catalog-content\") pod \"community-operators-7gjwt\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.003282 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-utilities\") pod \"community-operators-7gjwt\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.104082 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-utilities\") pod \"community-operators-7gjwt\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.104145 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pt4wn\" (UniqueName: \"kubernetes.io/projected/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-kube-api-access-pt4wn\") pod \"community-operators-7gjwt\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.104189 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-catalog-content\") pod \"community-operators-7gjwt\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.104767 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-utilities\") pod \"community-operators-7gjwt\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.104842 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-catalog-content\") pod \"community-operators-7gjwt\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.128048 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt4wn\" (UniqueName: \"kubernetes.io/projected/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-kube-api-access-pt4wn\") pod \"community-operators-7gjwt\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.255701 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.499917 4917 generic.go:334] "Generic (PLEG): container finished" podID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerID="602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d" exitCode=0 Dec 12 00:20:53 crc kubenswrapper[4917]: I1212 00:20:53.499969 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tshmq" event={"ID":"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4","Type":"ContainerDied","Data":"602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d"} Dec 12 00:20:53 crc kubenswrapper[4917]: E1212 00:20:53.830863 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d is running failed: container process not found" containerID="602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:20:53 crc kubenswrapper[4917]: E1212 00:20:53.831255 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d is running failed: container process not found" containerID="602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:20:53 crc kubenswrapper[4917]: E1212 00:20:53.831559 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d is running failed: container process not found" containerID="602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d" 
cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:20:53 crc kubenswrapper[4917]: E1212 00:20:53.831588 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tshmq" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="registry-server" Dec 12 00:20:59 crc kubenswrapper[4917]: E1212 00:20:59.436586 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43" Dec 12 00:20:59 crc kubenswrapper[4917]: E1212 00:20:59.437389 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:interconnect-operator,Image:registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43,Command:[qdr-operator],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:qdr-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_QDROUTERD_IMAGE,Value:registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:amq7-interconnect-opera
tor.v1.10.20,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8c2pb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod interconnect-operator-5bb49f789d-kwphw_service-telemetry(fedd1280-db6c-42a9-b725-7887bb70e09e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 00:20:59 crc kubenswrapper[4917]: E1212 00:20:59.438801 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/interconnect-operator-5bb49f789d-kwphw" podUID="fedd1280-db6c-42a9-b725-7887bb70e09e" Dec 12 00:20:59 crc kubenswrapper[4917]: E1212 00:20:59.568857 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43\\\"\"" pod="service-telemetry/interconnect-operator-5bb49f789d-kwphw" podUID="fedd1280-db6c-42a9-b725-7887bb70e09e" Dec 12 00:21:03 crc kubenswrapper[4917]: E1212 00:21:03.832375 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d is running failed: container process not found" containerID="602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:21:03 crc kubenswrapper[4917]: E1212 00:21:03.833161 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d is running failed: container process not found" containerID="602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:21:03 crc kubenswrapper[4917]: E1212 00:21:03.833613 4917 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d is running failed: container process not found" containerID="602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 00:21:03 crc kubenswrapper[4917]: E1212 00:21:03.833935 4917 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tshmq" 
podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="registry-server" Dec 12 00:21:07 crc kubenswrapper[4917]: I1212 00:21:07.831485 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:21:07 crc kubenswrapper[4917]: I1212 00:21:07.900021 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-catalog-content\") pod \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " Dec 12 00:21:07 crc kubenswrapper[4917]: I1212 00:21:07.900586 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wkkw\" (UniqueName: \"kubernetes.io/projected/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-kube-api-access-6wkkw\") pod \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " Dec 12 00:21:07 crc kubenswrapper[4917]: I1212 00:21:07.901491 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-utilities\") pod \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\" (UID: \"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4\") " Dec 12 00:21:07 crc kubenswrapper[4917]: I1212 00:21:07.902261 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-utilities" (OuterVolumeSpecName: "utilities") pod "2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" (UID: "2e5fe1c9-2bf3-41ac-8e26-166dca5770d4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:21:07 crc kubenswrapper[4917]: I1212 00:21:07.902743 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:07 crc kubenswrapper[4917]: I1212 00:21:07.927991 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-kube-api-access-6wkkw" (OuterVolumeSpecName: "kube-api-access-6wkkw") pod "2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" (UID: "2e5fe1c9-2bf3-41ac-8e26-166dca5770d4"). InnerVolumeSpecName "kube-api-access-6wkkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:21:08 crc kubenswrapper[4917]: I1212 00:21:08.004069 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wkkw\" (UniqueName: \"kubernetes.io/projected/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-kube-api-access-6wkkw\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:08 crc kubenswrapper[4917]: I1212 00:21:08.034528 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" (UID: "2e5fe1c9-2bf3-41ac-8e26-166dca5770d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:21:08 crc kubenswrapper[4917]: I1212 00:21:08.105886 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:08 crc kubenswrapper[4917]: E1212 00:21:08.423713 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 12 00:21:08 crc kubenswrapper[4917]: E1212 00:21:08.423923 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h_openshift-operators(3192b300-476f-4127-a662-9636f89655c7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 00:21:08 crc kubenswrapper[4917]: E1212 00:21:08.425394 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" podUID="3192b300-476f-4127-a662-9636f89655c7" Dec 12 00:21:08 crc kubenswrapper[4917]: I1212 00:21:08.660978 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tshmq" Dec 12 00:21:08 crc kubenswrapper[4917]: I1212 00:21:08.661314 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tshmq" event={"ID":"2e5fe1c9-2bf3-41ac-8e26-166dca5770d4","Type":"ContainerDied","Data":"d29ee9866863dea7da1a13dda7ba42fd6981da36d84f1407a23895c4a366ad71"} Dec 12 00:21:08 crc kubenswrapper[4917]: I1212 00:21:08.661357 4917 scope.go:117] "RemoveContainer" containerID="602df8b266eb154c25c97cb76b6cf559766db01241b93de679851930f45fde0d" Dec 12 00:21:08 crc kubenswrapper[4917]: E1212 00:21:08.662316 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" podUID="3192b300-476f-4127-a662-9636f89655c7" Dec 12 00:21:08 crc kubenswrapper[4917]: I1212 00:21:08.723095 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tshmq"] Dec 12 00:21:08 crc kubenswrapper[4917]: I1212 00:21:08.727064 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tshmq"] Dec 12 00:21:09 crc kubenswrapper[4917]: I1212 00:21:09.616143 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" path="/var/lib/kubelet/pods/2e5fe1c9-2bf3-41ac-8e26-166dca5770d4/volumes" Dec 12 00:21:10 crc kubenswrapper[4917]: I1212 00:21:10.422166 4917 scope.go:117] "RemoveContainer" containerID="d8c48c5737369a9008f19ac700e83a6baa275002a614ab3cc901b6efde78fe24" Dec 12 00:21:12 crc kubenswrapper[4917]: I1212 00:21:12.965984 4917 scope.go:117] 
"RemoveContainer" containerID="8f25aeaf995062e654bee5ffe100cb1d2370fc4ca672fde0f661b072dc8a62ec" Dec 12 00:21:13 crc kubenswrapper[4917]: I1212 00:21:13.435783 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gjwt"] Dec 12 00:21:13 crc kubenswrapper[4917]: W1212 00:21:13.444543 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbe4b516_aa59_4d7a_9fbf_cf60e9627711.slice/crio-099c3786fdda5529412445234ff458f3fa665d6bcf96d921c4cd83133d0b762e WatchSource:0}: Error finding container 099c3786fdda5529412445234ff458f3fa665d6bcf96d921c4cd83133d0b762e: Status 404 returned error can't find the container with id 099c3786fdda5529412445234ff458f3fa665d6bcf96d921c4cd83133d0b762e Dec 12 00:21:13 crc kubenswrapper[4917]: I1212 00:21:13.696527 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gjwt" event={"ID":"dbe4b516-aa59-4d7a-9fbf-cf60e9627711","Type":"ContainerStarted","Data":"099c3786fdda5529412445234ff458f3fa665d6bcf96d921c4cd83133d0b762e"} Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.729427 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" event={"ID":"9084360e-9bf8-4c2e-a531-ae62da73050d","Type":"ContainerStarted","Data":"ea604c9804620ce85d25e34902671bcff61fe31e46d8d621b5032dc8099f4649"} Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.730911 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.731245 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4" 
event={"ID":"ef2dff2d-d381-4578-8da4-a7e49e767228","Type":"ContainerStarted","Data":"bf98199c971b9808b3c378115c7a975ad9d4521b48f788468ca6f229a584d8c5"} Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.735524 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zhp" event={"ID":"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95","Type":"ContainerStarted","Data":"550ecd3f581933a30c2d0ea06841d25d72e7243cff95e4a2d8160ee05bee4400"} Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.737156 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-kwphw" event={"ID":"fedd1280-db6c-42a9-b725-7887bb70e09e","Type":"ContainerStarted","Data":"61ac7cedb2debfaea6d2718996418797fa20f1d097e8593af402a5ee3975ed6c"} Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.739181 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-q9zcl" event={"ID":"e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d","Type":"ContainerStarted","Data":"b01499ec629516546e7e5a8d7dbdf3d6338c75d38480e1b1f4ff6e7a29063f39"} Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.739870 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-q9zcl" Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.741569 4917 generic.go:334] "Generic (PLEG): container finished" podID="277c47e2-03cd-4ac4-9125-3379f89dc58c" containerID="9da9778a1b1a03b732a1b970c7a19471bf05a52966bdd05c14cb81dced645e16" exitCode=0 Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.741624 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" event={"ID":"277c47e2-03cd-4ac4-9125-3379f89dc58c","Type":"ContainerDied","Data":"9da9778a1b1a03b732a1b970c7a19471bf05a52966bdd05c14cb81dced645e16"} Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 
00:21:18.744277 4917 generic.go:334] "Generic (PLEG): container finished" podID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerID="b66ff431b8c2228c0a0bd88a1a4110f07b2793677449a9e43a5848e3fcd0c31d" exitCode=0 Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.744370 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gjwt" event={"ID":"dbe4b516-aa59-4d7a-9fbf-cf60e9627711","Type":"ContainerDied","Data":"b66ff431b8c2228c0a0bd88a1a4110f07b2793677449a9e43a5848e3fcd0c31d"} Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.750010 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" event={"ID":"0560a2ae-f9f1-489d-97f2-5f51bbb6dff1","Type":"ContainerStarted","Data":"6b5e7acc672dcc78a58056c31ec2e3b21fe6c66a9ee59a2d662d4f265750f07d"} Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.752848 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" event={"ID":"4e46b2b7-70db-4c02-bde2-45fd66f3f151","Type":"ContainerStarted","Data":"69602c42568c4e1d92767d25cd72c274c816ceb64a065cbb17e054c861108e19"} Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.759524 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" podStartSLOduration=11.217943102 podStartE2EDuration="40.759509228s" podCreationTimestamp="2025-12-12 00:20:38 +0000 UTC" firstStartedPulling="2025-12-12 00:20:43.570117086 +0000 UTC m=+878.347917899" lastFinishedPulling="2025-12-12 00:21:13.111683202 +0000 UTC m=+907.889484025" observedRunningTime="2025-12-12 00:21:18.754426142 +0000 UTC m=+913.532226975" watchObservedRunningTime="2025-12-12 00:21:18.759509228 +0000 UTC m=+913.537310041" Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.797928 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-d8bb48f5d-zkjtd" Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.878805 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-g22vz" podStartSLOduration=14.99605268 podStartE2EDuration="41.878783669s" podCreationTimestamp="2025-12-12 00:20:37 +0000 UTC" firstStartedPulling="2025-12-12 00:20:43.541861228 +0000 UTC m=+878.319662041" lastFinishedPulling="2025-12-12 00:21:10.424592217 +0000 UTC m=+905.202393030" observedRunningTime="2025-12-12 00:21:18.87471536 +0000 UTC m=+913.652516183" watchObservedRunningTime="2025-12-12 00:21:18.878783669 +0000 UTC m=+913.656584502" Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.936027 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-kwphw" podStartSLOduration=7.539410683 podStartE2EDuration="44.936003645s" podCreationTimestamp="2025-12-12 00:20:34 +0000 UTC" firstStartedPulling="2025-12-12 00:20:35.714489235 +0000 UTC m=+870.492290038" lastFinishedPulling="2025-12-12 00:21:13.111082187 +0000 UTC m=+907.888883000" observedRunningTime="2025-12-12 00:21:18.933499907 +0000 UTC m=+913.711300730" watchObservedRunningTime="2025-12-12 00:21:18.936003645 +0000 UTC m=+913.713804458" Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.955799 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-jz8h4" podStartSLOduration=12.572205178 podStartE2EDuration="41.955778145s" podCreationTimestamp="2025-12-12 00:20:37 +0000 UTC" firstStartedPulling="2025-12-12 00:20:43.541873118 +0000 UTC m=+878.319673931" lastFinishedPulling="2025-12-12 00:21:12.925446085 +0000 UTC m=+907.703246898" observedRunningTime="2025-12-12 00:21:18.953083103 +0000 UTC m=+913.730883936" watchObservedRunningTime="2025-12-12 00:21:18.955778145 +0000 UTC 
m=+913.733578978" Dec 12 00:21:18 crc kubenswrapper[4917]: I1212 00:21:18.985995 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-q9zcl" podStartSLOduration=11.597612139 podStartE2EDuration="40.985972425s" podCreationTimestamp="2025-12-12 00:20:38 +0000 UTC" firstStartedPulling="2025-12-12 00:20:43.577142724 +0000 UTC m=+878.354943537" lastFinishedPulling="2025-12-12 00:21:12.96550301 +0000 UTC m=+907.743303823" observedRunningTime="2025-12-12 00:21:18.985709438 +0000 UTC m=+913.763510271" watchObservedRunningTime="2025-12-12 00:21:18.985972425 +0000 UTC m=+913.763773238" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.008202 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b4zhp" podStartSLOduration=14.087125924 podStartE2EDuration="53.008173321s" podCreationTimestamp="2025-12-12 00:20:26 +0000 UTC" firstStartedPulling="2025-12-12 00:20:34.007777608 +0000 UTC m=+868.785578421" lastFinishedPulling="2025-12-12 00:21:12.928825005 +0000 UTC m=+907.706625818" observedRunningTime="2025-12-12 00:21:19.006138157 +0000 UTC m=+913.783938980" watchObservedRunningTime="2025-12-12 00:21:19.008173321 +0000 UTC m=+913.785974144" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.058269 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-8fb9d549d-7c4mf" podStartSLOduration=11.577783968 podStartE2EDuration="41.058253285s" podCreationTimestamp="2025-12-12 00:20:38 +0000 UTC" firstStartedPulling="2025-12-12 00:20:43.569816628 +0000 UTC m=+878.347617431" lastFinishedPulling="2025-12-12 00:21:13.050285935 +0000 UTC m=+907.828086748" observedRunningTime="2025-12-12 00:21:19.054937676 +0000 UTC m=+913.832738499" watchObservedRunningTime="2025-12-12 00:21:19.058253285 +0000 UTC m=+913.836054088" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.756971 4917 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 00:21:19 crc kubenswrapper[4917]: E1212 00:21:19.757318 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="registry-server" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.757334 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="registry-server" Dec 12 00:21:19 crc kubenswrapper[4917]: E1212 00:21:19.757350 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="extract-utilities" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.757358 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="extract-utilities" Dec 12 00:21:19 crc kubenswrapper[4917]: E1212 00:21:19.757370 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="extract-content" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.757377 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="extract-content" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.757496 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5fe1c9-2bf3-41ac-8e26-166dca5770d4" containerName="registry-server" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.758450 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.762067 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.765363 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" event={"ID":"277c47e2-03cd-4ac4-9125-3379f89dc58c","Type":"ContainerStarted","Data":"9f0317ee9fd4f87d983db28d879abe70a3514f78dfa0823917cb70a6c784018c"} Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.766575 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.762432 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.763943 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.764174 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.764365 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-g6dkn" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.766386 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.766444 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.766971 4917 
reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.802113 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.809309 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.809854 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.809973 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.810222 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-remote-certificate-authorities\") pod 
\"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.810361 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.810465 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.810591 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.810775 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: 
I1212 00:21:19.810941 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.811108 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.811274 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.811342 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.811368 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-probe-user\") pod 
\"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.811424 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.811531 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.857147 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" podStartSLOduration=19.621438679 podStartE2EDuration="54.857131852s" podCreationTimestamp="2025-12-12 00:20:25 +0000 UTC" firstStartedPulling="2025-12-12 00:20:34.006392461 +0000 UTC m=+868.784193274" lastFinishedPulling="2025-12-12 00:21:09.242085634 +0000 UTC m=+904.019886447" observedRunningTime="2025-12-12 00:21:19.854679726 +0000 UTC m=+914.632480569" watchObservedRunningTime="2025-12-12 00:21:19.857131852 +0000 UTC m=+914.634932655" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912498 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: 
\"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912561 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912603 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912630 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912699 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912729 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912749 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912776 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912807 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912826 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912849 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912884 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912905 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912925 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.912958 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: 
\"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.913273 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.913453 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.913608 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.913757 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.913950 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: 
\"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.914624 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.915013 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.915062 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.919230 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.919261 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" 
(UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.919582 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.919866 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.924085 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 00:21:19.924633 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:19 crc kubenswrapper[4917]: I1212 
00:21:19.938085 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/6bda0b57-4afb-46a0-a754-2efd6aaa2a95-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"6bda0b57-4afb-46a0-a754-2efd6aaa2a95\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:20 crc kubenswrapper[4917]: I1212 00:21:20.087603 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:21:20 crc kubenswrapper[4917]: I1212 00:21:20.590716 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 00:21:20 crc kubenswrapper[4917]: I1212 00:21:20.773866 4917 generic.go:334] "Generic (PLEG): container finished" podID="277c47e2-03cd-4ac4-9125-3379f89dc58c" containerID="9f0317ee9fd4f87d983db28d879abe70a3514f78dfa0823917cb70a6c784018c" exitCode=0 Dec 12 00:21:20 crc kubenswrapper[4917]: I1212 00:21:20.773975 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" event={"ID":"277c47e2-03cd-4ac4-9125-3379f89dc58c","Type":"ContainerDied","Data":"9f0317ee9fd4f87d983db28d879abe70a3514f78dfa0823917cb70a6c784018c"} Dec 12 00:21:20 crc kubenswrapper[4917]: I1212 00:21:20.776131 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"6bda0b57-4afb-46a0-a754-2efd6aaa2a95","Type":"ContainerStarted","Data":"a11c666f8aa381ee193b3f834100f5a3560fdc05f849346f7281f7e08f93bb3d"} Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.288343 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.405450 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w6gk\" (UniqueName: \"kubernetes.io/projected/277c47e2-03cd-4ac4-9125-3379f89dc58c-kube-api-access-5w6gk\") pod \"277c47e2-03cd-4ac4-9125-3379f89dc58c\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.405897 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-util\") pod \"277c47e2-03cd-4ac4-9125-3379f89dc58c\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.406043 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-bundle\") pod \"277c47e2-03cd-4ac4-9125-3379f89dc58c\" (UID: \"277c47e2-03cd-4ac4-9125-3379f89dc58c\") " Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.407343 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-bundle" (OuterVolumeSpecName: "bundle") pod "277c47e2-03cd-4ac4-9125-3379f89dc58c" (UID: "277c47e2-03cd-4ac4-9125-3379f89dc58c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.413250 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277c47e2-03cd-4ac4-9125-3379f89dc58c-kube-api-access-5w6gk" (OuterVolumeSpecName: "kube-api-access-5w6gk") pod "277c47e2-03cd-4ac4-9125-3379f89dc58c" (UID: "277c47e2-03cd-4ac4-9125-3379f89dc58c"). InnerVolumeSpecName "kube-api-access-5w6gk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.419591 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-util" (OuterVolumeSpecName: "util") pod "277c47e2-03cd-4ac4-9125-3379f89dc58c" (UID: "277c47e2-03cd-4ac4-9125-3379f89dc58c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.507504 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w6gk\" (UniqueName: \"kubernetes.io/projected/277c47e2-03cd-4ac4-9125-3379f89dc58c-kube-api-access-5w6gk\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.507756 4917 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-util\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.507847 4917 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/277c47e2-03cd-4ac4-9125-3379f89dc58c-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.812232 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" event={"ID":"277c47e2-03cd-4ac4-9125-3379f89dc58c","Type":"ContainerDied","Data":"6f2ada049e956f945299c05a7b628890309fe8f4228cfdf6cd32804b950955af"} Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.812281 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f2ada049e956f945299c05a7b628890309fe8f4228cfdf6cd32804b950955af" Dec 12 00:21:24 crc kubenswrapper[4917]: I1212 00:21:24.812406 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w" Dec 12 00:21:25 crc kubenswrapper[4917]: I1212 00:21:25.822950 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" event={"ID":"3192b300-476f-4127-a662-9636f89655c7","Type":"ContainerStarted","Data":"3942d61b9719819ed13d13b98b2ecec9cf84b05813214ee853cf4f644b1f6164"} Dec 12 00:21:25 crc kubenswrapper[4917]: I1212 00:21:25.850448 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h" podStartSLOduration=-9223371988.00435 podStartE2EDuration="48.850425918s" podCreationTimestamp="2025-12-12 00:20:37 +0000 UTC" firstStartedPulling="2025-12-12 00:20:39.41588257 +0000 UTC m=+874.193683383" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:21:25.846109712 +0000 UTC m=+920.623910545" watchObservedRunningTime="2025-12-12 00:21:25.850425918 +0000 UTC m=+920.628226731" Dec 12 00:21:26 crc kubenswrapper[4917]: I1212 00:21:26.841017 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:21:26 crc kubenswrapper[4917]: I1212 00:21:26.844255 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:21:26 crc kubenswrapper[4917]: I1212 00:21:26.853359 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gjwt" event={"ID":"dbe4b516-aa59-4d7a-9fbf-cf60e9627711","Type":"ContainerStarted","Data":"86a2afd83735bc0a4b051c24f64e80b62aee5fcba93a8a855913def662279a33"} Dec 12 00:21:26 crc kubenswrapper[4917]: I1212 00:21:26.925226 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:21:28 crc kubenswrapper[4917]: I1212 00:21:28.368397 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:21:29 crc kubenswrapper[4917]: I1212 00:21:29.091618 4917 generic.go:334] "Generic (PLEG): container finished" podID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerID="86a2afd83735bc0a4b051c24f64e80b62aee5fcba93a8a855913def662279a33" exitCode=0 Dec 12 00:21:29 crc kubenswrapper[4917]: I1212 00:21:29.092912 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gjwt" event={"ID":"dbe4b516-aa59-4d7a-9fbf-cf60e9627711","Type":"ContainerDied","Data":"86a2afd83735bc0a4b051c24f64e80b62aee5fcba93a8a855913def662279a33"} Dec 12 00:21:29 crc kubenswrapper[4917]: I1212 00:21:29.315552 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-q9zcl" Dec 12 00:21:31 crc kubenswrapper[4917]: I1212 00:21:31.339502 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gjwt" event={"ID":"dbe4b516-aa59-4d7a-9fbf-cf60e9627711","Type":"ContainerStarted","Data":"8da1d88ba740682988dcfd0732a00e81e9f46c5974046a4d297ef6235e37062f"} Dec 12 00:21:31 crc kubenswrapper[4917]: I1212 00:21:31.459394 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7gjwt" podStartSLOduration=27.814628101 podStartE2EDuration="39.45936956s" podCreationTimestamp="2025-12-12 00:20:52 +0000 UTC" firstStartedPulling="2025-12-12 00:21:18.746095278 +0000 UTC m=+913.523896091" lastFinishedPulling="2025-12-12 00:21:30.390836737 +0000 UTC m=+925.168637550" observedRunningTime="2025-12-12 00:21:31.456596745 +0000 UTC m=+926.234397558" watchObservedRunningTime="2025-12-12 00:21:31.45936956 +0000 UTC m=+926.237170383" Dec 12 00:21:31 crc 
kubenswrapper[4917]: I1212 00:21:31.706587 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4zhp"] Dec 12 00:21:31 crc kubenswrapper[4917]: I1212 00:21:31.706896 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b4zhp" podUID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerName="registry-server" containerID="cri-o://550ecd3f581933a30c2d0ea06841d25d72e7243cff95e4a2d8160ee05bee4400" gracePeriod=2 Dec 12 00:21:32 crc kubenswrapper[4917]: I1212 00:21:32.395978 4917 generic.go:334] "Generic (PLEG): container finished" podID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerID="550ecd3f581933a30c2d0ea06841d25d72e7243cff95e4a2d8160ee05bee4400" exitCode=0 Dec 12 00:21:32 crc kubenswrapper[4917]: I1212 00:21:32.396051 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zhp" event={"ID":"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95","Type":"ContainerDied","Data":"550ecd3f581933a30c2d0ea06841d25d72e7243cff95e4a2d8160ee05bee4400"} Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.060132 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.256519 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.256760 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.260142 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-utilities\") pod \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.260194 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-catalog-content\") pod \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.260264 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb9wv\" (UniqueName: \"kubernetes.io/projected/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-kube-api-access-fb9wv\") pod \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\" (UID: \"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95\") " Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.261326 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-utilities" (OuterVolumeSpecName: "utilities") pod "1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" (UID: "1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.273031 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-kube-api-access-fb9wv" (OuterVolumeSpecName: "kube-api-access-fb9wv") pod "1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" (UID: "1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95"). InnerVolumeSpecName "kube-api-access-fb9wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.323158 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz"] Dec 12 00:21:33 crc kubenswrapper[4917]: E1212 00:21:33.323422 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerName="extract-utilities" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.323455 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerName="extract-utilities" Dec 12 00:21:33 crc kubenswrapper[4917]: E1212 00:21:33.323488 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277c47e2-03cd-4ac4-9125-3379f89dc58c" containerName="extract" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.323496 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="277c47e2-03cd-4ac4-9125-3379f89dc58c" containerName="extract" Dec 12 00:21:33 crc kubenswrapper[4917]: E1212 00:21:33.323508 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277c47e2-03cd-4ac4-9125-3379f89dc58c" containerName="pull" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.323517 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="277c47e2-03cd-4ac4-9125-3379f89dc58c" containerName="pull" Dec 12 00:21:33 crc kubenswrapper[4917]: E1212 00:21:33.323531 4917 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerName="registry-server" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.323538 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerName="registry-server" Dec 12 00:21:33 crc kubenswrapper[4917]: E1212 00:21:33.323547 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277c47e2-03cd-4ac4-9125-3379f89dc58c" containerName="util" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.323553 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="277c47e2-03cd-4ac4-9125-3379f89dc58c" containerName="util" Dec 12 00:21:33 crc kubenswrapper[4917]: E1212 00:21:33.323565 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerName="extract-content" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.323573 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerName="extract-content" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.330168 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="277c47e2-03cd-4ac4-9125-3379f89dc58c" containerName="extract" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.330206 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" containerName="registry-server" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.330832 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.336842 4917 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-cr4k5" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.337147 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.337299 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.365468 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7wp7\" (UniqueName: \"kubernetes.io/projected/0c70229c-6f38-4e44-8c1c-72953f82905e-kube-api-access-b7wp7\") pod \"cert-manager-operator-controller-manager-5446d6888b-qqzzz\" (UID: \"0c70229c-6f38-4e44-8c1c-72953f82905e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.365581 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c70229c-6f38-4e44-8c1c-72953f82905e-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-qqzzz\" (UID: \"0c70229c-6f38-4e44-8c1c-72953f82905e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.365739 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.365818 4917 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-fb9wv\" (UniqueName: \"kubernetes.io/projected/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-kube-api-access-fb9wv\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.384758 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz"] Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.386933 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" (UID: "1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.434789 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zhp" event={"ID":"1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95","Type":"ContainerDied","Data":"effa8be4dfcc4666576b248a27b6788f56d51ec29b357810bd3be745acfbdd9e"} Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.434869 4917 scope.go:117] "RemoveContainer" containerID="550ecd3f581933a30c2d0ea06841d25d72e7243cff95e4a2d8160ee05bee4400" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.434993 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b4zhp" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.467090 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c70229c-6f38-4e44-8c1c-72953f82905e-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-qqzzz\" (UID: \"0c70229c-6f38-4e44-8c1c-72953f82905e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.467226 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7wp7\" (UniqueName: \"kubernetes.io/projected/0c70229c-6f38-4e44-8c1c-72953f82905e-kube-api-access-b7wp7\") pod \"cert-manager-operator-controller-manager-5446d6888b-qqzzz\" (UID: \"0c70229c-6f38-4e44-8c1c-72953f82905e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.467329 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.468853 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c70229c-6f38-4e44-8c1c-72953f82905e-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-qqzzz\" (UID: \"0c70229c-6f38-4e44-8c1c-72953f82905e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.472908 4917 scope.go:117] "RemoveContainer" containerID="adb465daa37f85f57b412f7a6f2b09d331eda177413b1ba38bd0e14c54305bd5" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.484173 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-b4zhp"] Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.492376 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b4zhp"] Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.501240 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.502226 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.510727 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.511023 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.511214 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.511372 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.543692 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7wp7\" (UniqueName: \"kubernetes.io/projected/0c70229c-6f38-4e44-8c1c-72953f82905e-kube-api-access-b7wp7\") pod \"cert-manager-operator-controller-manager-5446d6888b-qqzzz\" (UID: \"0c70229c-6f38-4e44-8c1c-72953f82905e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.543861 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] 
Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.616277 4917 scope.go:117] "RemoveContainer" containerID="0c4a36305bff8369aa38f56bbd86cca602037f877cbbf3848d443c58606c81c1" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.635808 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95" path="/var/lib/kubelet/pods/1b26f46e-4a5c-4ac6-96e0-94e5c77fcf95/volumes" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.665463 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.674754 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.674813 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.674858 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: 
I1212 00:21:33.674885 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.674945 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.674967 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.674981 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.675000 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwv2l\" (UniqueName: 
\"kubernetes.io/projected/300d0110-d215-464f-b5bb-21d6cd0749e3-kube-api-access-vwv2l\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.675020 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.675035 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.675055 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.675074 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 
12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776488 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776555 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776580 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776606 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwv2l\" (UniqueName: \"kubernetes.io/projected/300d0110-d215-464f-b5bb-21d6cd0749e3-kube-api-access-vwv2l\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776632 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-root\") pod 
\"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776672 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776701 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776720 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776740 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776760 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776800 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.776823 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.777623 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.777827 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.778341 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.778361 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.778504 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.778550 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.778895 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.778926 4917 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.779477 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.788210 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.793262 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.831434 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwv2l\" (UniqueName: \"kubernetes.io/projected/300d0110-d215-464f-b5bb-21d6cd0749e3-kube-api-access-vwv2l\") pod \"service-telemetry-operator-1-build\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:33 crc kubenswrapper[4917]: I1212 00:21:33.880732 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:21:34 crc kubenswrapper[4917]: I1212 00:21:34.325031 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7gjwt" podUID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerName="registry-server" probeResult="failure" output=< Dec 12 00:21:34 crc kubenswrapper[4917]: timeout: failed to connect service ":50051" within 1s Dec 12 00:21:34 crc kubenswrapper[4917]: > Dec 12 00:21:34 crc kubenswrapper[4917]: I1212 00:21:34.387730 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz"] Dec 12 00:21:34 crc kubenswrapper[4917]: W1212 00:21:34.426492 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c70229c_6f38_4e44_8c1c_72953f82905e.slice/crio-b563a2b6278cbe832d996e810a8c2fc523c28ed38d7d980b4146361394d74971 WatchSource:0}: Error finding container b563a2b6278cbe832d996e810a8c2fc523c28ed38d7d980b4146361394d74971: Status 404 returned error can't find the container with id b563a2b6278cbe832d996e810a8c2fc523c28ed38d7d980b4146361394d74971 Dec 12 00:21:34 crc kubenswrapper[4917]: I1212 00:21:34.495795 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" event={"ID":"0c70229c-6f38-4e44-8c1c-72953f82905e","Type":"ContainerStarted","Data":"b563a2b6278cbe832d996e810a8c2fc523c28ed38d7d980b4146361394d74971"} Dec 12 00:21:34 crc kubenswrapper[4917]: I1212 00:21:34.819986 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 12 00:21:35 crc kubenswrapper[4917]: 
I1212 00:21:35.520980 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"300d0110-d215-464f-b5bb-21d6cd0749e3","Type":"ContainerStarted","Data":"fe5c5d599f9f7108f7f89f90c211fdc55a64ceaffb16a41835afc58a0ce4153d"} Dec 12 00:21:43 crc kubenswrapper[4917]: I1212 00:21:43.425182 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:21:43 crc kubenswrapper[4917]: I1212 00:21:43.445850 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 12 00:21:43 crc kubenswrapper[4917]: I1212 00:21:43.993909 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.114286 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.115514 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.117928 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.118584 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.118972 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.138968 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.158927 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159004 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159031 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-buildworkdir\") pod 
\"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159054 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159084 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159103 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159125 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gljn9\" (UniqueName: \"kubernetes.io/projected/17805f64-0014-4f94-b468-7c869f8a62fe-kube-api-access-gljn9\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159153 4917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159176 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159221 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159265 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.159287 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-run\") 
pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.260619 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261029 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261085 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261114 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261141 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261163 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261191 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261216 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261245 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gljn9\" (UniqueName: \"kubernetes.io/projected/17805f64-0014-4f94-b468-7c869f8a62fe-kube-api-access-gljn9\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261271 4917 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261289 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.261340 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.262183 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.262396 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.262636 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.262979 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.263084 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.263167 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.263185 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: 
\"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.263325 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.263729 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.267944 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.283726 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.290772 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gljn9\" (UniqueName: 
\"kubernetes.io/projected/17805f64-0014-4f94-b468-7c869f8a62fe-kube-api-access-gljn9\") pod \"service-telemetry-operator-2-build\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.436100 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.694934 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gjwt"] Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.695168 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7gjwt" podUID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerName="registry-server" containerID="cri-o://8da1d88ba740682988dcfd0732a00e81e9f46c5974046a4d297ef6235e37062f" gracePeriod=2 Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.972226 4917 patch_prober.go:28] interesting pod/route-controller-manager-67cd78cfc7-cv9hd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 00:21:45 crc kubenswrapper[4917]: I1212 00:21:45.972300 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-67cd78cfc7-cv9hd" podUID="3d7be2ca-e1fa-495a-bfd5-654cca46c0df" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 00:21:50 crc kubenswrapper[4917]: I1212 00:21:50.891694 4917 generic.go:334] "Generic (PLEG): container finished" podID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" 
containerID="8da1d88ba740682988dcfd0732a00e81e9f46c5974046a4d297ef6235e37062f" exitCode=0 Dec 12 00:21:50 crc kubenswrapper[4917]: I1212 00:21:50.891920 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gjwt" event={"ID":"dbe4b516-aa59-4d7a-9fbf-cf60e9627711","Type":"ContainerDied","Data":"8da1d88ba740682988dcfd0732a00e81e9f46c5974046a4d297ef6235e37062f"} Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.477745 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.620389 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-utilities\") pod \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.620879 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt4wn\" (UniqueName: \"kubernetes.io/projected/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-kube-api-access-pt4wn\") pod \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.620969 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-catalog-content\") pod \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\" (UID: \"dbe4b516-aa59-4d7a-9fbf-cf60e9627711\") " Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.622834 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-utilities" (OuterVolumeSpecName: "utilities") pod "dbe4b516-aa59-4d7a-9fbf-cf60e9627711" (UID: 
"dbe4b516-aa59-4d7a-9fbf-cf60e9627711"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.648957 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-kube-api-access-pt4wn" (OuterVolumeSpecName: "kube-api-access-pt4wn") pod "dbe4b516-aa59-4d7a-9fbf-cf60e9627711" (UID: "dbe4b516-aa59-4d7a-9fbf-cf60e9627711"). InnerVolumeSpecName "kube-api-access-pt4wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.676934 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbe4b516-aa59-4d7a-9fbf-cf60e9627711" (UID: "dbe4b516-aa59-4d7a-9fbf-cf60e9627711"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.722711 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.722742 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt4wn\" (UniqueName: \"kubernetes.io/projected/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-kube-api-access-pt4wn\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.722754 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe4b516-aa59-4d7a-9fbf-cf60e9627711-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.927781 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-7gjwt" event={"ID":"dbe4b516-aa59-4d7a-9fbf-cf60e9627711","Type":"ContainerDied","Data":"099c3786fdda5529412445234ff458f3fa665d6bcf96d921c4cd83133d0b762e"} Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.927903 4917 scope.go:117] "RemoveContainer" containerID="8da1d88ba740682988dcfd0732a00e81e9f46c5974046a4d297ef6235e37062f" Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.929765 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7gjwt" Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.947222 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.971656 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gjwt"] Dec 12 00:21:52 crc kubenswrapper[4917]: I1212 00:21:52.973152 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7gjwt"] Dec 12 00:21:53 crc kubenswrapper[4917]: I1212 00:21:53.610888 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" path="/var/lib/kubelet/pods/dbe4b516-aa59-4d7a-9fbf-cf60e9627711/volumes" Dec 12 00:21:59 crc kubenswrapper[4917]: I1212 00:21:59.642483 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:21:59 crc kubenswrapper[4917]: I1212 00:21:59.642863 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:22:03 crc kubenswrapper[4917]: E1212 00:22:03.821756 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911" Dec 12 00:22:03 crc kubenswrapper[4917]: E1212 00:22:03.822548 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-operator,Image:registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911,Command:[/usr/bin/cert-manager-operator],Args:[start --v=$(OPERATOR_LOG_LEVEL) --trusted-ca-configmap=$(TRUSTED_CA_CONFIGMAP_NAME) --cloud-credentials-secret=$(CLOUD_CREDENTIALS_SECRET_NAME) --unsupported-addon-features=$(UNSUPPORTED_ADDON_FEATURES)],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cert-manager-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_WEBHOOK,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CA_INJECTOR,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a046
8a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CONTROLLER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ACMESOLVER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-acmesolver-rhel9@sha256:ba937fc4b9eee31422914352c11a45b90754ba4fbe490ea45249b90afdc4e0a7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ISTIOCSR,Value:registry.redhat.io/cert-manager/cert-manager-istio-csr-rhel9@sha256:af1ac813b8ee414ef215936f05197bc498bccbd540f3e2a93cb522221ba112bc,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.18.3,ValueFrom:nil,},EnvVar{Name:ISTIOCSR_OPERAND_IMAGE_VERSION,Value:0.14.2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:1.18.0,ValueFrom:nil,},EnvVar{Name:OPERATOR_LOG_LEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:TRUSTED_CA_CONFIGMAP_NAME,Value:,ValueFrom:nil,},EnvVar{Name:CLOUD_CREDENTIALS_SECRET_NAME,Value:,ValueFrom:nil,},EnvVar{Name:UNSUPPORTED_ADDON_FEATURES,Value:,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cert-manager-operator.v1.18.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{33554432 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7wp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-operator-controller-manager-5446d6888b-qqzzz_cert-manager-operator(0c70229c-6f38-4e44-8c1c-72953f82905e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 00:22:03 crc kubenswrapper[4917]: E1212 00:22:03.823701 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" podUID="0c70229c-6f38-4e44-8c1c-72953f82905e" Dec 12 00:22:04 crc kubenswrapper[4917]: I1212 00:22:04.314142 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" 
event={"ID":"17805f64-0014-4f94-b468-7c869f8a62fe","Type":"ContainerStarted","Data":"17506af75a1b43312d1d55c7b33f77cf8e844bdbae914f33605503ac922adc3a"} Dec 12 00:22:04 crc kubenswrapper[4917]: E1212 00:22:04.315600 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911\\\"\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" podUID="0c70229c-6f38-4e44-8c1c-72953f82905e" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.077636 4917 scope.go:117] "RemoveContainer" containerID="86a2afd83735bc0a4b051c24f64e80b62aee5fcba93a8a855913def662279a33" Dec 12 00:22:10 crc kubenswrapper[4917]: E1212 00:22:10.102055 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe" Dec 12 00:22:10 crc kubenswrapper[4917]: E1212 00:22:10.102320 4917 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 12 00:22:10 crc kubenswrapper[4917]: init container &Container{Name:manage-dockerfile,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe,Command:[],Args:[openshift-manage-dockerfile 
--v=0],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:BUILD,Value:{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"service-telemetry-operator-1","namespace":"service-telemetry","uid":"167cf47f-3f73-4c62-94d7-1c2089fa93fb","resourceVersion":"33514","generation":1,"creationTimestamp":"2025-12-12T00:21:33Z","labels":{"build":"service-telemetry-operator","buildconfig":"service-telemetry-operator","openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"service-telemetry-operator","uid":"4d99e016-1374-41a7-b95b-ead664a28562","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2025-12-12T00:21:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:build":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d99e016-1374-41a7-b95b-ead664a28562\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:dockerfile":{},"f:type":{}},"f:strategy":{"f:dockerStrategy":{".":{},"f:from":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM quay.io/operator-framework/ansible-operator:v1.36.1\n\n# temporarily switch to root user to adjust image layers\nUSER 0\n# Upstream CI builds need the additional EPEL sources for python3-passlib and python3-bcrypt but have no 
working repos to install epel-release\n# NO_PROXY is undefined in upstream CI builds, but defined (usually blank) during openshift builds (a possibly brittle hack)\nRUN bash -c -- 'if [ \"${NO_PROXY:-__ZZZZZ}\" == \"__ZZZZZ\" ]; then echo \"Applying upstream EPEL hacks\" \u0026\u0026 echo -e \"-----BEGIN PGP PUBLIC KEY BLOCK-----\\nmQINBGE3mOsBEACsU+XwJWDJVkItBaugXhXIIkb9oe+7aadELuVo0kBmc3HXt/Yp\\nCJW9hHEiGZ6z2jwgPqyJjZhCvcAWvgzKcvqE+9i0NItV1rzfxrBe2BtUtZmVcuE6\\n2b+SPfxQ2Hr8llaawRjt8BCFX/ZzM4/1Qk+EzlfTcEcpkMf6wdO7kD6ulBk/tbsW\\nDHX2lNcxszTf+XP9HXHWJlA2xBfP+Dk4gl4DnO2Y1xR0OSywE/QtvEbN5cY94ieu\\nn7CBy29AleMhmbnx9pw3NyxcFIAsEZHJoU4ZW9ulAJ/ogttSyAWeacW7eJGW31/Z\\n39cS+I4KXJgeGRI20RmpqfH0tuT+X5Da59YpjYxkbhSK3HYBVnNPhoJFUc2j5iKy\\nXLgkapu1xRnEJhw05kr4LCbud0NTvfecqSqa+59kuVc+zWmfTnGTYc0PXZ6Oa3rK\\n44UOmE6eAT5zd/ToleDO0VesN+EO7CXfRsm7HWGpABF5wNK3vIEF2uRr2VJMvgqS\\n9eNwhJyOzoca4xFSwCkc6dACGGkV+CqhufdFBhmcAsUotSxe3zmrBjqA0B/nxIvH\\nDVgOAMnVCe+Lmv8T0mFgqZSJdIUdKjnOLu/GRFhjDKIak4jeMBMTYpVnU+HhMHLq\\nuDiZkNEvEEGhBQmZuI8J55F/a6UURnxUwT3piyi3Pmr2IFD7ahBxPzOBCQARAQAB\\ntCdGZWRvcmEgKGVwZWw5KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAk4EEwEI\\nADgWIQT/itE0RZcQbs6BO5GKOHK/MihGfAUCYTeY6wIbDwULCQgHAgYVCgkICwIE\\nFgIDAQIeAQIXgAAKCRCKOHK/MihGfFX/EACBPWv20+ttYu1A5WvtHJPzwbj0U4yF\\n3zTQpBglQ2UfkRpYdipTlT3Ih6j5h2VmgRPtINCc/ZE28adrWpBoeFIS2YAKOCLC\\nnZYtHl2nCoLq1U7FSttUGsZ/t8uGCBgnugTfnIYcmlP1jKKA6RJAclK89evDQX5n\\nR9ZD+Cq3CBMlttvSTCht0qQVlwycedH8iWyYgP/mF0W35BIn7NuuZwWhgR00n/VG\\n4nbKPOzTWbsP45awcmivdrS74P6mL84WfkghipdmcoyVb1B8ZP4Y/Ke0RXOnLhNe\\nCfrXXvuW+Pvg2RTfwRDtehGQPAgXbmLmz2ZkV69RGIr54HJv84NDbqZovRTMr7gL\\n9k3ciCzXCiYQgM8yAyGHV0KEhFSQ1HV7gMnt9UmxbxBE2pGU7vu3CwjYga5DpwU7\\nw5wu1TmM5KgZtZvuWOTDnqDLf0cKoIbW8FeeCOn24elcj32bnQDuF9DPey1mqcvT\\n/yEo/Ushyz6CVYxN8DGgcy2M9JOsnmjDx02h6qgWGWDuKgb9jZrvRedpAQCeemEd\\nfhEs6ihqVxRFl16HxC4EVijybhAL76SsM2nbtIqW1apBQJQpXWtQwwdvgTVpdEtE\\nr4ArVJYX5LrswnWEQMOelugUG6S3ZjMfcyOa/O0364iY73vyVgaYK+2XtT2usMux\\nVL469Kj5m13T6w==\\n=Mjs/\\n-----END PGP PUBLIC KEY 
BLOCK-----\" \u003e /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9 \u0026\u0026 echo -e \"[epel]\\nname=Extra Packages for Enterprise Linux 9 - \\$basearch\\nmetalink=https://mirrors.fedoraproject.org/metalink?repo=epel-9\u0026arch=\\$basearch\u0026infra=\\$infra\u0026content=\\$contentdir\\nenabled=1\\ngpgcheck=1\\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9\" \u003e /etc/yum.repos.d/epel.repo; fi'\n\n# update the base image to allow forward-looking optimistic updates during the testing phase, with the added benefit of helping move closer to passing security scans.\n# -- excludes ansible so it remains at 2.9 tag as shipped with the base image\n# -- installs python3-passlib and python3-bcrypt for oauth-proxy interface\n# -- cleans up the cached data from dnf to keep the image as small as possible\nRUN dnf update -y --exclude=ansible* \u0026\u0026 dnf install -y python3-passlib python3-bcrypt \u0026\u0026 dnf clean all \u0026\u0026 rm -rf /var/cache/dnf\n\nCOPY requirements.yml ${HOME}/requirements.yml\nRUN ansible-galaxy collection install -r ${HOME}/requirements.yml \\\n \u0026\u0026 chmod -R ug+rwx ${HOME}/.ansible\n\n# switch back to user 1001 when running the base image (non-root)\nUSER 1001\n\n# copy in required artifacts for the operator\nCOPY watches.yaml ${HOME}/watches.yaml\nCOPY roles/ ${HOME}/roles/\n"},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"quay.io/operator-framework/ansible-operator@sha256:f732a6137c30c513cf410c0ef36b8f0ce528957a0007a68ecaa9f3ebd3ba48dc"},"pullSecret":{"name":"builder-dockercfg-tfjwq"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest"},"pushSecret":{"name":"builder-dockercfg-tfjwq"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Build configuration 
change"}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest","config":{"kind":"BuildConfig","namespace":"service-telemetry","name":"service-telemetry-operator"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2025-12-12T00:21:33Z","lastTransitionTime":"2025-12-12T00:21:33Z"}]}} Dec 12 00:22:10 crc kubenswrapper[4917]: ,ValueFrom:nil,},EnvVar{Name:LANG,Value:C.utf8,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/registries.conf,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_DIR_PATH,Value:/var/run/configs/openshift.io/build-system/registries.d,ValueFrom:nil,},EnvVar{Name:BUILD_SIGNATURE_POLICY_PATH,Value:/var/run/configs/openshift.io/build-system/policy.json,ValueFrom:nil,},EnvVar{Name:BUILD_STORAGE_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/storage.conf,ValueFrom:nil,},EnvVar{Name:BUILD_BLOBCACHE_DIR,Value:/var/cache/blobs,ValueFrom:nil,},EnvVar{Name:HTTP_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:http_proxy,Value:,ValueFrom:nil,},EnvVar{Name:HTTPS_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:https_proxy,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:no_proxy,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:buildworkdir,ReadOnly:false,MountPath:/tmp/build,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-system-configs,ReadOnly:true,MountPath:/var/run/configs/openshift.io/build-system,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-proxy-ca-bundles,ReadOnly:false,MountPath:/var/run
/configs/openshift.io/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-blob-cache,ReadOnly:false,MountPath:/var/cache/blobs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwv2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[CHOWN DAC_OVERRIDE],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-1-build_service-telemetry(300d0110-d215-464f-b5bb-21d6cd0749e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Dec 12 00:22:10 crc kubenswrapper[4917]: > logger="UnhandledError" Dec 12 00:22:10 crc kubenswrapper[4917]: E1212 00:22:10.103488 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manage-dockerfile\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-1-build" podUID="300d0110-d215-464f-b5bb-21d6cd0749e3" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.491906 4917 scope.go:117] "RemoveContainer" containerID="b66ff431b8c2228c0a0bd88a1a4110f07b2793677449a9e43a5848e3fcd0c31d" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.734990 
4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:22:10 crc kubenswrapper[4917]: E1212 00:22:10.743170 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Dec 12 00:22:10 crc kubenswrapper[4917]: E1212 00:22:10.743372 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 
0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(6bda0b57-4afb-46a0-a754-2efd6aaa2a95): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 00:22:10 crc kubenswrapper[4917]: E1212 00:22:10.744583 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="service-telemetry/elasticsearch-es-default-0" podUID="6bda0b57-4afb-46a0-a754-2efd6aaa2a95" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.755589 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-buildworkdir\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.755663 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwv2l\" (UniqueName: \"kubernetes.io/projected/300d0110-d215-464f-b5bb-21d6cd0749e3-kube-api-access-vwv2l\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.755701 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-run\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.755735 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-push\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.755771 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-root\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.755801 4917 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-ca-bundles\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.755831 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-node-pullsecrets\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.755858 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-build-blob-cache\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.755936 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.756273 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.756620 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.757002 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-buildcachedir\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.757178 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-proxy-ca-bundles\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.757249 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-pull\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.757339 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-system-configs\") pod \"300d0110-d215-464f-b5bb-21d6cd0749e3\" (UID: \"300d0110-d215-464f-b5bb-21d6cd0749e3\") " Dec 12 
00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.757064 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.757213 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.757313 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.757730 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.757422 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.758410 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.758474 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.758491 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.758515 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.758527 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.758540 4917 reconciler_common.go:293] "Volume detached for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/300d0110-d215-464f-b5bb-21d6cd0749e3-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.758552 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/300d0110-d215-464f-b5bb-21d6cd0749e3-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.758567 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.759406 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.768312 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.768835 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/300d0110-d215-464f-b5bb-21d6cd0749e3-kube-api-access-vwv2l" (OuterVolumeSpecName: "kube-api-access-vwv2l") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). 
InnerVolumeSpecName "kube-api-access-vwv2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.773265 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "300d0110-d215-464f-b5bb-21d6cd0749e3" (UID: "300d0110-d215-464f-b5bb-21d6cd0749e3"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.861577 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.861625 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/300d0110-d215-464f-b5bb-21d6cd0749e3-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.861638 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/300d0110-d215-464f-b5bb-21d6cd0749e3-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:10 crc kubenswrapper[4917]: I1212 00:22:10.861663 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwv2l\" (UniqueName: \"kubernetes.io/projected/300d0110-d215-464f-b5bb-21d6cd0749e3-kube-api-access-vwv2l\") on node \"crc\" DevicePath \"\"" Dec 12 00:22:11 crc kubenswrapper[4917]: I1212 00:22:11.434105 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" 
event={"ID":"300d0110-d215-464f-b5bb-21d6cd0749e3","Type":"ContainerDied","Data":"fe5c5d599f9f7108f7f89f90c211fdc55a64ceaffb16a41835afc58a0ce4153d"} Dec 12 00:22:11 crc kubenswrapper[4917]: I1212 00:22:11.434178 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 12 00:22:11 crc kubenswrapper[4917]: I1212 00:22:11.438281 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"17805f64-0014-4f94-b468-7c869f8a62fe","Type":"ContainerStarted","Data":"e1ab4279f96fd91ff4d17ece2e6358a970d85aedbedb91e9d42e50e1e6979a85"} Dec 12 00:22:11 crc kubenswrapper[4917]: E1212 00:22:11.440095 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="6bda0b57-4afb-46a0-a754-2efd6aaa2a95" Dec 12 00:22:11 crc kubenswrapper[4917]: I1212 00:22:11.672942 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 12 00:22:11 crc kubenswrapper[4917]: I1212 00:22:11.680774 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 12 00:22:11 crc kubenswrapper[4917]: I1212 00:22:11.688047 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 00:22:11 crc kubenswrapper[4917]: I1212 00:22:11.694420 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 00:22:12 crc kubenswrapper[4917]: E1212 00:22:12.445295 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="6bda0b57-4afb-46a0-a754-2efd6aaa2a95" Dec 12 00:22:13 crc kubenswrapper[4917]: E1212 00:22:13.450780 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="6bda0b57-4afb-46a0-a754-2efd6aaa2a95" Dec 12 00:22:13 crc kubenswrapper[4917]: I1212 00:22:13.620935 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="300d0110-d215-464f-b5bb-21d6cd0749e3" path="/var/lib/kubelet/pods/300d0110-d215-464f-b5bb-21d6cd0749e3/volumes" Dec 12 00:22:17 crc kubenswrapper[4917]: I1212 00:22:17.519440 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" event={"ID":"0c70229c-6f38-4e44-8c1c-72953f82905e","Type":"ContainerStarted","Data":"f87bb0203f899e102bcf28363f4ac6fac8fddf3dc43f6d2b5c1b882eb69d5db2"} Dec 12 00:22:20 crc kubenswrapper[4917]: I1212 00:22:20.979947 4917 generic.go:334] "Generic (PLEG): container finished" podID="17805f64-0014-4f94-b468-7c869f8a62fe" containerID="e1ab4279f96fd91ff4d17ece2e6358a970d85aedbedb91e9d42e50e1e6979a85" exitCode=0 Dec 12 00:22:20 crc kubenswrapper[4917]: I1212 00:22:20.980271 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"17805f64-0014-4f94-b468-7c869f8a62fe","Type":"ContainerDied","Data":"e1ab4279f96fd91ff4d17ece2e6358a970d85aedbedb91e9d42e50e1e6979a85"} Dec 12 00:22:20 crc kubenswrapper[4917]: I1212 00:22:20.986146 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-qqzzz" podStartSLOduration=5.871733543 podStartE2EDuration="47.986120823s" podCreationTimestamp="2025-12-12 00:21:33 +0000 UTC" firstStartedPulling="2025-12-12 00:21:34.458415327 +0000 UTC m=+929.236216140" lastFinishedPulling="2025-12-12 00:22:16.572802617 +0000 UTC m=+971.350603420" observedRunningTime="2025-12-12 00:22:17.543927345 +0000 UTC m=+972.321728158" watchObservedRunningTime="2025-12-12 00:22:20.986120823 +0000 UTC m=+975.763921646" Dec 12 00:22:20 crc kubenswrapper[4917]: I1212 00:22:20.987550 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-nvcnr"] Dec 12 00:22:20 crc kubenswrapper[4917]: E1212 00:22:20.987953 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerName="extract-content" Dec 12 00:22:20 crc kubenswrapper[4917]: I1212 00:22:20.988078 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerName="extract-content" Dec 12 00:22:20 crc kubenswrapper[4917]: E1212 00:22:20.988153 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerName="registry-server" Dec 12 00:22:20 crc kubenswrapper[4917]: I1212 00:22:20.988215 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerName="registry-server" Dec 12 00:22:20 crc kubenswrapper[4917]: E1212 00:22:20.988298 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerName="extract-utilities" Dec 12 00:22:20 crc kubenswrapper[4917]: I1212 00:22:20.988359 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerName="extract-utilities" Dec 12 00:22:20 crc kubenswrapper[4917]: I1212 00:22:20.988540 4917 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dbe4b516-aa59-4d7a-9fbf-cf60e9627711" containerName="registry-server" Dec 12 00:22:20 crc kubenswrapper[4917]: I1212 00:22:20.989467 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" Dec 12 00:22:20 crc kubenswrapper[4917]: I1212 00:22:20.999342 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-nvcnr"] Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.003335 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.003615 4917 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-zr8kw" Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.003812 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.149528 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c7107a1-63ed-46dd-b721-8eccfd351246-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-nvcnr\" (UID: \"6c7107a1-63ed-46dd-b721-8eccfd351246\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.149963 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zttpl\" (UniqueName: \"kubernetes.io/projected/6c7107a1-63ed-46dd-b721-8eccfd351246-kube-api-access-zttpl\") pod \"cert-manager-webhook-f4fb5df64-nvcnr\" (UID: \"6c7107a1-63ed-46dd-b721-8eccfd351246\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.251750 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zttpl\" (UniqueName: \"kubernetes.io/projected/6c7107a1-63ed-46dd-b721-8eccfd351246-kube-api-access-zttpl\") pod \"cert-manager-webhook-f4fb5df64-nvcnr\" (UID: \"6c7107a1-63ed-46dd-b721-8eccfd351246\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.252159 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c7107a1-63ed-46dd-b721-8eccfd351246-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-nvcnr\" (UID: \"6c7107a1-63ed-46dd-b721-8eccfd351246\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.276526 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zttpl\" (UniqueName: \"kubernetes.io/projected/6c7107a1-63ed-46dd-b721-8eccfd351246-kube-api-access-zttpl\") pod \"cert-manager-webhook-f4fb5df64-nvcnr\" (UID: \"6c7107a1-63ed-46dd-b721-8eccfd351246\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.282031 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c7107a1-63ed-46dd-b721-8eccfd351246-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-nvcnr\" (UID: \"6c7107a1-63ed-46dd-b721-8eccfd351246\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.321602 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" Dec 12 00:22:21 crc kubenswrapper[4917]: I1212 00:22:21.658679 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-nvcnr"] Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.130191 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" event={"ID":"6c7107a1-63ed-46dd-b721-8eccfd351246","Type":"ContainerStarted","Data":"f56fcedd0dba573252f06b7bd0a4a3fb1133b17367fd53b30c157150dd1ff223"} Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.301349 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2"] Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.302578 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.304463 4917 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8fmgn" Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.321712 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2"] Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.494446 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv7n4\" (UniqueName: \"kubernetes.io/projected/5772c7d5-7528-4639-9885-ac71feeed2fa-kube-api-access-vv7n4\") pod \"cert-manager-cainjector-855d9ccff4-2tfk2\" (UID: \"5772c7d5-7528-4639-9885-ac71feeed2fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.494605 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/5772c7d5-7528-4639-9885-ac71feeed2fa-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2tfk2\" (UID: \"5772c7d5-7528-4639-9885-ac71feeed2fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.595799 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv7n4\" (UniqueName: \"kubernetes.io/projected/5772c7d5-7528-4639-9885-ac71feeed2fa-kube-api-access-vv7n4\") pod \"cert-manager-cainjector-855d9ccff4-2tfk2\" (UID: \"5772c7d5-7528-4639-9885-ac71feeed2fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.596303 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5772c7d5-7528-4639-9885-ac71feeed2fa-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2tfk2\" (UID: \"5772c7d5-7528-4639-9885-ac71feeed2fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.620616 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5772c7d5-7528-4639-9885-ac71feeed2fa-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2tfk2\" (UID: \"5772c7d5-7528-4639-9885-ac71feeed2fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.626567 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv7n4\" (UniqueName: \"kubernetes.io/projected/5772c7d5-7528-4639-9885-ac71feeed2fa-kube-api-access-vv7n4\") pod \"cert-manager-cainjector-855d9ccff4-2tfk2\" (UID: \"5772c7d5-7528-4639-9885-ac71feeed2fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" Dec 12 00:22:22 crc kubenswrapper[4917]: I1212 00:22:22.924038 4917 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" Dec 12 00:22:23 crc kubenswrapper[4917]: I1212 00:22:23.477257 4917 generic.go:334] "Generic (PLEG): container finished" podID="17805f64-0014-4f94-b468-7c869f8a62fe" containerID="7efc3be34275d0657b566456fc9d58995c75bf0d10c71b829f1bd29fa6ec28f7" exitCode=0 Dec 12 00:22:23 crc kubenswrapper[4917]: I1212 00:22:23.477748 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"17805f64-0014-4f94-b468-7c869f8a62fe","Type":"ContainerDied","Data":"7efc3be34275d0657b566456fc9d58995c75bf0d10c71b829f1bd29fa6ec28f7"} Dec 12 00:22:23 crc kubenswrapper[4917]: I1212 00:22:23.562481 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_17805f64-0014-4f94-b468-7c869f8a62fe/manage-dockerfile/0.log" Dec 12 00:22:23 crc kubenswrapper[4917]: I1212 00:22:23.889333 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2"] Dec 12 00:22:24 crc kubenswrapper[4917]: I1212 00:22:24.534432 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"17805f64-0014-4f94-b468-7c869f8a62fe","Type":"ContainerStarted","Data":"8e5d68579cc3390e96aa35b1af058eb7843324d3491f512614ad99e0bc0688cd"} Dec 12 00:22:24 crc kubenswrapper[4917]: I1212 00:22:24.540506 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" event={"ID":"5772c7d5-7528-4639-9885-ac71feeed2fa","Type":"ContainerStarted","Data":"e40c7e37a6ad2b0eb52aaa2d220657ba05a6e9f8d20706c344874eae7e5c2201"} Dec 12 00:22:24 crc kubenswrapper[4917]: I1212 00:22:24.600366 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" 
podStartSLOduration=32.708356295 podStartE2EDuration="39.600344606s" podCreationTimestamp="2025-12-12 00:21:45 +0000 UTC" firstStartedPulling="2025-12-12 00:22:03.810316505 +0000 UTC m=+958.588117318" lastFinishedPulling="2025-12-12 00:22:10.702304816 +0000 UTC m=+965.480105629" observedRunningTime="2025-12-12 00:22:24.597221024 +0000 UTC m=+979.375021847" watchObservedRunningTime="2025-12-12 00:22:24.600344606 +0000 UTC m=+979.378145439" Dec 12 00:22:29 crc kubenswrapper[4917]: I1212 00:22:29.638952 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:22:29 crc kubenswrapper[4917]: I1212 00:22:29.639697 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.628879 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-kxvnb"] Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.629791 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-kxvnb" Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.632130 4917 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lx8mf" Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.635062 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-kxvnb"] Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.769291 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/659b4365-0d0c-412c-b1a2-1712298ec896-bound-sa-token\") pod \"cert-manager-86cb77c54b-kxvnb\" (UID: \"659b4365-0d0c-412c-b1a2-1712298ec896\") " pod="cert-manager/cert-manager-86cb77c54b-kxvnb" Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.769409 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66rn\" (UniqueName: \"kubernetes.io/projected/659b4365-0d0c-412c-b1a2-1712298ec896-kube-api-access-t66rn\") pod \"cert-manager-86cb77c54b-kxvnb\" (UID: \"659b4365-0d0c-412c-b1a2-1712298ec896\") " pod="cert-manager/cert-manager-86cb77c54b-kxvnb" Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.871191 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t66rn\" (UniqueName: \"kubernetes.io/projected/659b4365-0d0c-412c-b1a2-1712298ec896-kube-api-access-t66rn\") pod \"cert-manager-86cb77c54b-kxvnb\" (UID: \"659b4365-0d0c-412c-b1a2-1712298ec896\") " pod="cert-manager/cert-manager-86cb77c54b-kxvnb" Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.871325 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/659b4365-0d0c-412c-b1a2-1712298ec896-bound-sa-token\") pod \"cert-manager-86cb77c54b-kxvnb\" (UID: 
\"659b4365-0d0c-412c-b1a2-1712298ec896\") " pod="cert-manager/cert-manager-86cb77c54b-kxvnb" Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.892158 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/659b4365-0d0c-412c-b1a2-1712298ec896-bound-sa-token\") pod \"cert-manager-86cb77c54b-kxvnb\" (UID: \"659b4365-0d0c-412c-b1a2-1712298ec896\") " pod="cert-manager/cert-manager-86cb77c54b-kxvnb" Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.894688 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t66rn\" (UniqueName: \"kubernetes.io/projected/659b4365-0d0c-412c-b1a2-1712298ec896-kube-api-access-t66rn\") pod \"cert-manager-86cb77c54b-kxvnb\" (UID: \"659b4365-0d0c-412c-b1a2-1712298ec896\") " pod="cert-manager/cert-manager-86cb77c54b-kxvnb" Dec 12 00:22:31 crc kubenswrapper[4917]: I1212 00:22:31.953842 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-kxvnb" Dec 12 00:22:54 crc kubenswrapper[4917]: E1212 00:22:54.179203 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Dec 12 00:22:54 crc kubenswrapper[4917]: E1212 00:22:54.180211 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) 
--dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zttpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 
healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-f4fb5df64-nvcnr_cert-manager(6c7107a1-63ed-46dd-b721-8eccfd351246): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 00:22:54 crc kubenswrapper[4917]: E1212 00:22:54.181634 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" podUID="6c7107a1-63ed-46dd-b721-8eccfd351246" Dec 12 00:22:54 crc kubenswrapper[4917]: E1212 00:22:54.242033 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Dec 12 00:22:54 crc kubenswrapper[4917]: E1212 00:22:54.242513 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cert-manager-cainjector,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/cainjector/cainjector],Args:[--leader-election-namespace=kube-system --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vv7n4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-cainjector-855d9ccff4-2tfk2_cert-manager(5772c7d5-7528-4639-9885-ac71feeed2fa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 00:22:54 crc 
kubenswrapper[4917]: E1212 00:22:54.243910 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" podUID="5772c7d5-7528-4639-9885-ac71feeed2fa" Dec 12 00:22:54 crc kubenswrapper[4917]: E1212 00:22:54.465630 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" podUID="6c7107a1-63ed-46dd-b721-8eccfd351246" Dec 12 00:22:54 crc kubenswrapper[4917]: E1212 00:22:54.465911 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" podUID="5772c7d5-7528-4639-9885-ac71feeed2fa" Dec 12 00:22:54 crc kubenswrapper[4917]: I1212 00:22:54.635932 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-kxvnb"] Dec 12 00:22:54 crc kubenswrapper[4917]: W1212 00:22:54.640704 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod659b4365_0d0c_412c_b1a2_1712298ec896.slice/crio-d67622560ddb4d4821af26e22e3fab10fa9541629c2b962c010c107a88164fb8 WatchSource:0}: Error finding container d67622560ddb4d4821af26e22e3fab10fa9541629c2b962c010c107a88164fb8: Status 404 returned error can't find the container 
with id d67622560ddb4d4821af26e22e3fab10fa9541629c2b962c010c107a88164fb8 Dec 12 00:22:55 crc kubenswrapper[4917]: I1212 00:22:55.472528 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"6bda0b57-4afb-46a0-a754-2efd6aaa2a95","Type":"ContainerStarted","Data":"be3045bb97d46f19445c01d353f52b216eab9445f1e117e15ae1b3c54a4b267f"} Dec 12 00:22:55 crc kubenswrapper[4917]: I1212 00:22:55.474084 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-kxvnb" event={"ID":"659b4365-0d0c-412c-b1a2-1712298ec896","Type":"ContainerStarted","Data":"d67622560ddb4d4821af26e22e3fab10fa9541629c2b962c010c107a88164fb8"} Dec 12 00:22:59 crc kubenswrapper[4917]: I1212 00:22:59.639187 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:22:59 crc kubenswrapper[4917]: I1212 00:22:59.639540 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:22:59 crc kubenswrapper[4917]: I1212 00:22:59.639601 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:22:59 crc kubenswrapper[4917]: I1212 00:22:59.640211 4917 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"810e6b0f2d007409666f46f2e8eac0cefab026671305efff02967dc13a6c6eec"} 
pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 00:22:59 crc kubenswrapper[4917]: I1212 00:22:59.640254 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" containerID="cri-o://810e6b0f2d007409666f46f2e8eac0cefab026671305efff02967dc13a6c6eec" gracePeriod=600 Dec 12 00:23:02 crc kubenswrapper[4917]: I1212 00:23:02.542880 4917 generic.go:334] "Generic (PLEG): container finished" podID="6bda0b57-4afb-46a0-a754-2efd6aaa2a95" containerID="be3045bb97d46f19445c01d353f52b216eab9445f1e117e15ae1b3c54a4b267f" exitCode=0 Dec 12 00:23:02 crc kubenswrapper[4917]: I1212 00:23:02.542979 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"6bda0b57-4afb-46a0-a754-2efd6aaa2a95","Type":"ContainerDied","Data":"be3045bb97d46f19445c01d353f52b216eab9445f1e117e15ae1b3c54a4b267f"} Dec 12 00:23:02 crc kubenswrapper[4917]: I1212 00:23:02.547994 4917 generic.go:334] "Generic (PLEG): container finished" podID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerID="810e6b0f2d007409666f46f2e8eac0cefab026671305efff02967dc13a6c6eec" exitCode=0 Dec 12 00:23:02 crc kubenswrapper[4917]: I1212 00:23:02.548026 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerDied","Data":"810e6b0f2d007409666f46f2e8eac0cefab026671305efff02967dc13a6c6eec"} Dec 12 00:23:02 crc kubenswrapper[4917]: I1212 00:23:02.548051 4917 scope.go:117] "RemoveContainer" containerID="96ec4d8e61f5bcbe03d7050d140b399f2045053de88e96d003dcf4d699ca9b59" Dec 12 00:23:03 crc kubenswrapper[4917]: I1212 00:23:03.815097 4917 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"dbf52aec17a34453ffa3035aa47c1f1caf454e50712594c9e392bfa350bed490"} Dec 12 00:23:03 crc kubenswrapper[4917]: I1212 00:23:03.817529 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-kxvnb" event={"ID":"659b4365-0d0c-412c-b1a2-1712298ec896","Type":"ContainerStarted","Data":"dad4396c46e50a2f0b78c9b5cc888650017857bf0053e98cc73cececa3528c6d"} Dec 12 00:23:03 crc kubenswrapper[4917]: I1212 00:23:03.828698 4917 generic.go:334] "Generic (PLEG): container finished" podID="6bda0b57-4afb-46a0-a754-2efd6aaa2a95" containerID="a50c5f37e0a3aba35df5baed424e8688fa7cfbaaa40bf9cb5baaf658c274be21" exitCode=0 Dec 12 00:23:03 crc kubenswrapper[4917]: I1212 00:23:03.828756 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"6bda0b57-4afb-46a0-a754-2efd6aaa2a95","Type":"ContainerDied","Data":"a50c5f37e0a3aba35df5baed424e8688fa7cfbaaa40bf9cb5baaf658c274be21"} Dec 12 00:23:03 crc kubenswrapper[4917]: I1212 00:23:03.927132 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-kxvnb" podStartSLOduration=24.609702937 podStartE2EDuration="32.927112944s" podCreationTimestamp="2025-12-12 00:22:31 +0000 UTC" firstStartedPulling="2025-12-12 00:22:54.643355851 +0000 UTC m=+1009.421156664" lastFinishedPulling="2025-12-12 00:23:02.960765858 +0000 UTC m=+1017.738566671" observedRunningTime="2025-12-12 00:23:03.91897114 +0000 UTC m=+1018.696771973" watchObservedRunningTime="2025-12-12 00:23:03.927112944 +0000 UTC m=+1018.704913767" Dec 12 00:23:05 crc kubenswrapper[4917]: I1212 00:23:04.838977 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"6bda0b57-4afb-46a0-a754-2efd6aaa2a95","Type":"ContainerStarted","Data":"25e2110c2ea65c375b05bb0a2d049324aa8529faa15bd0563432584e6a5aa1c9"} Dec 12 00:23:05 crc kubenswrapper[4917]: I1212 00:23:04.839743 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:23:05 crc kubenswrapper[4917]: I1212 00:23:04.893408 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=12.286789442 podStartE2EDuration="1m45.893381328s" podCreationTimestamp="2025-12-12 00:21:19 +0000 UTC" firstStartedPulling="2025-12-12 00:21:20.60273321 +0000 UTC m=+915.380534023" lastFinishedPulling="2025-12-12 00:22:54.209325096 +0000 UTC m=+1008.987125909" observedRunningTime="2025-12-12 00:23:04.880052088 +0000 UTC m=+1019.657852921" watchObservedRunningTime="2025-12-12 00:23:04.893381328 +0000 UTC m=+1019.671182131" Dec 12 00:23:10 crc kubenswrapper[4917]: I1212 00:23:10.000277 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" event={"ID":"6c7107a1-63ed-46dd-b721-8eccfd351246","Type":"ContainerStarted","Data":"f85dcbb5b8ab8701e635ff52df042d17c147bb6b7def38024440cad1756b953f"} Dec 12 00:23:10 crc kubenswrapper[4917]: I1212 00:23:10.001332 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" Dec 12 00:23:10 crc kubenswrapper[4917]: I1212 00:23:10.006787 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" event={"ID":"5772c7d5-7528-4639-9885-ac71feeed2fa","Type":"ContainerStarted","Data":"9d776d20b48f46a13c6259f3025d5670e2fdaed0e53371c6d5e45756e8a6fb93"} Dec 12 00:23:10 crc kubenswrapper[4917]: I1212 00:23:10.018554 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" podStartSLOduration=-9223371986.836245 podStartE2EDuration="50.018531359s" podCreationTimestamp="2025-12-12 00:22:20 +0000 UTC" firstStartedPulling="2025-12-12 00:22:21.721463815 +0000 UTC m=+976.499264628" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:23:10.016027144 +0000 UTC m=+1024.793827957" watchObservedRunningTime="2025-12-12 00:23:10.018531359 +0000 UTC m=+1024.796332172" Dec 12 00:23:15 crc kubenswrapper[4917]: I1212 00:23:15.246300 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="6bda0b57-4afb-46a0-a754-2efd6aaa2a95" containerName="elasticsearch" probeResult="failure" output=< Dec 12 00:23:15 crc kubenswrapper[4917]: {"timestamp": "2025-12-12T00:23:15+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 12 00:23:15 crc kubenswrapper[4917]: > Dec 12 00:23:16 crc kubenswrapper[4917]: I1212 00:23:16.327209 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-nvcnr" Dec 12 00:23:16 crc kubenswrapper[4917]: I1212 00:23:16.352817 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2tfk2" podStartSLOduration=-9223371982.501982 podStartE2EDuration="54.352794268s" podCreationTimestamp="2025-12-12 00:22:22 +0000 UTC" firstStartedPulling="2025-12-12 00:22:23.908231157 +0000 UTC m=+978.686031970" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:23:10.028992723 +0000 UTC m=+1024.806793556" watchObservedRunningTime="2025-12-12 00:23:16.352794268 +0000 UTC m=+1031.130595081" Dec 12 00:23:20 crc kubenswrapper[4917]: I1212 00:23:20.198620 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="6bda0b57-4afb-46a0-a754-2efd6aaa2a95" containerName="elasticsearch" 
probeResult="failure" output=< Dec 12 00:23:20 crc kubenswrapper[4917]: {"timestamp": "2025-12-12T00:23:20+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 12 00:23:20 crc kubenswrapper[4917]: > Dec 12 00:23:25 crc kubenswrapper[4917]: I1212 00:23:25.217490 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="6bda0b57-4afb-46a0-a754-2efd6aaa2a95" containerName="elasticsearch" probeResult="failure" output=< Dec 12 00:23:25 crc kubenswrapper[4917]: {"timestamp": "2025-12-12T00:23:25+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 12 00:23:25 crc kubenswrapper[4917]: > Dec 12 00:23:30 crc kubenswrapper[4917]: I1212 00:23:30.175550 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="6bda0b57-4afb-46a0-a754-2efd6aaa2a95" containerName="elasticsearch" probeResult="failure" output=< Dec 12 00:23:30 crc kubenswrapper[4917]: {"timestamp": "2025-12-12T00:23:30+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 12 00:23:30 crc kubenswrapper[4917]: > Dec 12 00:23:35 crc kubenswrapper[4917]: I1212 00:23:35.656338 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 00:24:46 crc kubenswrapper[4917]: I1212 00:24:46.789122 4917 patch_prober.go:28] interesting pod/controller-manager-7d6454f4c9-xzc72 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 00:24:46 crc kubenswrapper[4917]: I1212 00:24:46.791209 4917 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" podUID="fef07310-5c97-491c-891d-e7de6a0c9597" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 00:24:46 crc kubenswrapper[4917]: I1212 00:24:46.835516 4917 patch_prober.go:28] interesting pod/controller-manager-7d6454f4c9-xzc72 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 00:24:46 crc kubenswrapper[4917]: I1212 00:24:46.835625 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7d6454f4c9-xzc72" podUID="fef07310-5c97-491c-891d-e7de6a0c9597" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 00:24:46 crc kubenswrapper[4917]: E1212 00:24:46.836994 4917 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.223s" Dec 12 00:25:29 crc kubenswrapper[4917]: I1212 00:25:29.639395 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:25:29 crc kubenswrapper[4917]: I1212 00:25:29.640025 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:25:35 crc kubenswrapper[4917]: I1212 00:25:35.826836 4917 generic.go:334] "Generic (PLEG): container finished" 
podID="17805f64-0014-4f94-b468-7c869f8a62fe" containerID="8e5d68579cc3390e96aa35b1af058eb7843324d3491f512614ad99e0bc0688cd" exitCode=0 Dec 12 00:25:35 crc kubenswrapper[4917]: I1212 00:25:35.827921 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"17805f64-0014-4f94-b468-7c869f8a62fe","Type":"ContainerDied","Data":"8e5d68579cc3390e96aa35b1af058eb7843324d3491f512614ad99e0bc0688cd"} Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.098132 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.140943 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-push\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.140998 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-buildcachedir\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141039 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-ca-bundles\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141068 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-build-blob-cache\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141114 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-proxy-ca-bundles\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141135 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-node-pullsecrets\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141174 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-root\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141195 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-pull\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141221 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-buildworkdir\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: 
\"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141200 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141262 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gljn9\" (UniqueName: \"kubernetes.io/projected/17805f64-0014-4f94-b468-7c869f8a62fe-kube-api-access-gljn9\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141347 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-system-configs\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141380 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-run\") pod \"17805f64-0014-4f94-b468-7c869f8a62fe\" (UID: \"17805f64-0014-4f94-b468-7c869f8a62fe\") " Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.141767 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.142939 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.145237 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.145796 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.146416 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.147933 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.164894 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.164930 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.165048 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17805f64-0014-4f94-b468-7c869f8a62fe-kube-api-access-gljn9" (OuterVolumeSpecName: "kube-api-access-gljn9") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "kube-api-access-gljn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.184705 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.243712 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.243751 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.243761 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.243772 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17805f64-0014-4f94-b468-7c869f8a62fe-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.243782 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/17805f64-0014-4f94-b468-7c869f8a62fe-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.243791 4917 reconciler_common.go:293] 
"Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.243802 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gljn9\" (UniqueName: \"kubernetes.io/projected/17805f64-0014-4f94-b468-7c869f8a62fe-kube-api-access-gljn9\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.243815 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/17805f64-0014-4f94-b468-7c869f8a62fe-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.243826 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.359024 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.447079 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.849679 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"17805f64-0014-4f94-b468-7c869f8a62fe","Type":"ContainerDied","Data":"17506af75a1b43312d1d55c7b33f77cf8e844bdbae914f33605503ac922adc3a"} Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.849751 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17506af75a1b43312d1d55c7b33f77cf8e844bdbae914f33605503ac922adc3a" Dec 12 00:25:37 crc kubenswrapper[4917]: I1212 00:25:37.849789 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 12 00:25:39 crc kubenswrapper[4917]: I1212 00:25:39.692743 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "17805f64-0014-4f94-b468-7c869f8a62fe" (UID: "17805f64-0014-4f94-b468-7c869f8a62fe"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:25:39 crc kubenswrapper[4917]: I1212 00:25:39.783296 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/17805f64-0014-4f94-b468-7c869f8a62fe-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.263559 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 12 00:25:45 crc kubenswrapper[4917]: E1212 00:25:45.263931 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17805f64-0014-4f94-b468-7c869f8a62fe" containerName="docker-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.263950 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="17805f64-0014-4f94-b468-7c869f8a62fe" containerName="docker-build" Dec 12 00:25:45 crc kubenswrapper[4917]: E1212 00:25:45.263967 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17805f64-0014-4f94-b468-7c869f8a62fe" containerName="manage-dockerfile" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.263974 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="17805f64-0014-4f94-b468-7c869f8a62fe" containerName="manage-dockerfile" Dec 12 00:25:45 crc kubenswrapper[4917]: E1212 00:25:45.263984 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17805f64-0014-4f94-b468-7c869f8a62fe" containerName="git-clone" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.263993 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="17805f64-0014-4f94-b468-7c869f8a62fe" containerName="git-clone" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.264123 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="17805f64-0014-4f94-b468-7c869f8a62fe" containerName="docker-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.264867 4917 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.268249 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.268309 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.268754 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.270428 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.285994 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.361823 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.361889 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.361918 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.361981 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.362044 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.362105 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.362131 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.362167 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.362343 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.362377 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.362397 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.362429 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp786\" (UniqueName: 
\"kubernetes.io/projected/f57c2963-2b6a-4c67-984d-e33893c7b244-kube-api-access-wp786\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464029 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464099 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464120 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464154 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464191 4917 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464229 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464252 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464280 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464303 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464329 4917 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464354 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464382 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp786\" (UniqueName: \"kubernetes.io/projected/f57c2963-2b6a-4c67-984d-e33893c7b244-kube-api-access-wp786\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464486 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464800 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 
12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.464895 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.465136 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.465222 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.465775 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.465791 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" 
Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.465967 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.466607 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.472518 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.473138 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.487260 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp786\" (UniqueName: \"kubernetes.io/projected/f57c2963-2b6a-4c67-984d-e33893c7b244-kube-api-access-wp786\") pod \"smart-gateway-operator-1-build\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.591673 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.864298 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 12 00:25:45 crc kubenswrapper[4917]: W1212 00:25:45.866550 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf57c2963_2b6a_4c67_984d_e33893c7b244.slice/crio-9698cf77bf9c24019f05d24f83adb4b2ea874d56640cefbbaab8965ddef00ae5 WatchSource:0}: Error finding container 9698cf77bf9c24019f05d24f83adb4b2ea874d56640cefbbaab8965ddef00ae5: Status 404 returned error can't find the container with id 9698cf77bf9c24019f05d24f83adb4b2ea874d56640cefbbaab8965ddef00ae5 Dec 12 00:25:45 crc kubenswrapper[4917]: I1212 00:25:45.908318 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"f57c2963-2b6a-4c67-984d-e33893c7b244","Type":"ContainerStarted","Data":"9698cf77bf9c24019f05d24f83adb4b2ea874d56640cefbbaab8965ddef00ae5"} Dec 12 00:25:47 crc kubenswrapper[4917]: I1212 00:25:47.926910 4917 generic.go:334] "Generic (PLEG): container finished" podID="f57c2963-2b6a-4c67-984d-e33893c7b244" containerID="37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099" exitCode=0 Dec 12 00:25:47 crc kubenswrapper[4917]: I1212 00:25:47.927005 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"f57c2963-2b6a-4c67-984d-e33893c7b244","Type":"ContainerDied","Data":"37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099"} Dec 12 00:25:48 crc kubenswrapper[4917]: I1212 00:25:48.940932 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"f57c2963-2b6a-4c67-984d-e33893c7b244","Type":"ContainerStarted","Data":"3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b"} Dec 12 00:25:48 crc kubenswrapper[4917]: I1212 00:25:48.985497 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.985468084 podStartE2EDuration="3.985468084s" podCreationTimestamp="2025-12-12 00:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:25:48.978800139 +0000 UTC m=+1183.756600972" watchObservedRunningTime="2025-12-12 00:25:48.985468084 +0000 UTC m=+1183.763268907" Dec 12 00:25:56 crc kubenswrapper[4917]: I1212 00:25:56.158872 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 12 00:25:56 crc kubenswrapper[4917]: I1212 00:25:56.160091 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="f57c2963-2b6a-4c67-984d-e33893c7b244" containerName="docker-build" containerID="cri-o://3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b" gracePeriod=30 Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.663459 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_f57c2963-2b6a-4c67-984d-e33893c7b244/docker-build/0.log" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.664569 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.779625 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-push\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.779792 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-node-pullsecrets\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.779832 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-run\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.779881 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-buildworkdir\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.779945 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-build-blob-cache\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.779962 4917 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.780021 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-proxy-ca-bundles\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.780061 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp786\" (UniqueName: \"kubernetes.io/projected/f57c2963-2b6a-4c67-984d-e33893c7b244-kube-api-access-wp786\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.780129 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-pull\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.780197 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-buildcachedir\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.780254 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" 
(UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-root\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.780303 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-system-configs\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.780364 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-ca-bundles\") pod \"f57c2963-2b6a-4c67-984d-e33893c7b244\" (UID: \"f57c2963-2b6a-4c67-984d-e33893c7b244\") " Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.780406 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.781047 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.781069 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f57c2963-2b6a-4c67-984d-e33893c7b244-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.781123 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.781711 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.781889 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.782846 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.783254 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.793779 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.795757 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.796055 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57c2963-2b6a-4c67-984d-e33893c7b244-kube-api-access-wp786" (OuterVolumeSpecName: "kube-api-access-wp786") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "kube-api-access-wp786". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.834332 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 12 00:25:57 crc kubenswrapper[4917]: E1212 00:25:57.834980 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57c2963-2b6a-4c67-984d-e33893c7b244" containerName="manage-dockerfile" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.835007 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57c2963-2b6a-4c67-984d-e33893c7b244" containerName="manage-dockerfile" Dec 12 00:25:57 crc kubenswrapper[4917]: E1212 00:25:57.835028 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57c2963-2b6a-4c67-984d-e33893c7b244" containerName="docker-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.835037 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57c2963-2b6a-4c67-984d-e33893c7b244" containerName="docker-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.835212 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f57c2963-2b6a-4c67-984d-e33893c7b244" containerName="docker-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.837320 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.840141 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.841038 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.842335 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.864516 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.883413 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.883575 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.883620 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: 
\"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.883687 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.883718 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.883788 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.883873 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.883899 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.883960 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgzc4\" (UniqueName: \"kubernetes.io/projected/4002e99e-6305-4211-8e86-525a001ca6b9-kube-api-access-vgzc4\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.883984 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.884113 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.884334 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 
00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.884547 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.884562 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.884576 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.884588 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.884599 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.884609 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp786\" (UniqueName: \"kubernetes.io/projected/f57c2963-2b6a-4c67-984d-e33893c7b244-kube-api-access-wp786\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.884618 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/f57c2963-2b6a-4c67-984d-e33893c7b244-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:57 crc 
kubenswrapper[4917]: I1212 00:25:57.884630 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f57c2963-2b6a-4c67-984d-e33893c7b244-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986460 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986513 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986541 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986560 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986582 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986621 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986711 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986748 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986808 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgzc4\" (UniqueName: \"kubernetes.io/projected/4002e99e-6305-4211-8e86-525a001ca6b9-kube-api-access-vgzc4\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 
12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986834 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986860 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.986886 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.987421 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.987744 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" 
Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.988066 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.988132 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.988195 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.988452 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.988549 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.988593 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.989552 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.994481 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:57 crc kubenswrapper[4917]: I1212 00:25:57.994917 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.021542 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_f57c2963-2b6a-4c67-984d-e33893c7b244/docker-build/0.log" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.022561 4917 
generic.go:334] "Generic (PLEG): container finished" podID="f57c2963-2b6a-4c67-984d-e33893c7b244" containerID="3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b" exitCode=1 Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.022625 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"f57c2963-2b6a-4c67-984d-e33893c7b244","Type":"ContainerDied","Data":"3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b"} Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.022693 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"f57c2963-2b6a-4c67-984d-e33893c7b244","Type":"ContainerDied","Data":"9698cf77bf9c24019f05d24f83adb4b2ea874d56640cefbbaab8965ddef00ae5"} Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.022717 4917 scope.go:117] "RemoveContainer" containerID="3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.022890 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.024385 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgzc4\" (UniqueName: \"kubernetes.io/projected/4002e99e-6305-4211-8e86-525a001ca6b9-kube-api-access-vgzc4\") pod \"smart-gateway-operator-2-build\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.044389 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). 
InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.088440 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.115273 4917 scope.go:117] "RemoveContainer" containerID="37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.168802 4917 scope.go:117] "RemoveContainer" containerID="3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b" Dec 12 00:25:58 crc kubenswrapper[4917]: E1212 00:25:58.169823 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b\": container with ID starting with 3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b not found: ID does not exist" containerID="3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.169985 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b"} err="failed to get container status \"3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b\": rpc error: code = NotFound desc = could not find container \"3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b\": container with ID starting with 3cc7150aa84e22b2fdc7596c0179829e6a20b955d43f8373c75523ab02e8581b not found: ID does not exist" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.170103 4917 scope.go:117] "RemoveContainer" containerID="37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099" Dec 12 00:25:58 crc 
kubenswrapper[4917]: E1212 00:25:58.170848 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099\": container with ID starting with 37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099 not found: ID does not exist" containerID="37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.170916 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099"} err="failed to get container status \"37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099\": rpc error: code = NotFound desc = could not find container \"37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099\": container with ID starting with 37e5f89b6ec512b813b004160ff875d886ecb522b8a967bda92722c7f0a5f099 not found: ID does not exist" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.176720 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.249075 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f57c2963-2b6a-4c67-984d-e33893c7b244" (UID: "f57c2963-2b6a-4c67-984d-e33893c7b244"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.292620 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f57c2963-2b6a-4c67-984d-e33893c7b244-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.382233 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.387174 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 12 00:25:58 crc kubenswrapper[4917]: I1212 00:25:58.414528 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 12 00:25:59 crc kubenswrapper[4917]: I1212 00:25:59.035148 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4002e99e-6305-4211-8e86-525a001ca6b9","Type":"ContainerStarted","Data":"fa8f8dd46edb42e735d63e15cab840393b270bba596c7be84b7abe06faf7f8c1"} Dec 12 00:25:59 crc kubenswrapper[4917]: I1212 00:25:59.035923 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4002e99e-6305-4211-8e86-525a001ca6b9","Type":"ContainerStarted","Data":"0275db67dd2dec7c4274927d37be576b6605e624ccd842dc27cff8fce74cf9b9"} Dec 12 00:25:59 crc kubenswrapper[4917]: I1212 00:25:59.620833 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f57c2963-2b6a-4c67-984d-e33893c7b244" path="/var/lib/kubelet/pods/f57c2963-2b6a-4c67-984d-e33893c7b244/volumes" Dec 12 00:25:59 crc kubenswrapper[4917]: I1212 00:25:59.639084 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:25:59 crc kubenswrapper[4917]: I1212 00:25:59.639222 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:26:00 crc kubenswrapper[4917]: I1212 00:26:00.042137 4917 generic.go:334] "Generic (PLEG): container finished" podID="4002e99e-6305-4211-8e86-525a001ca6b9" containerID="fa8f8dd46edb42e735d63e15cab840393b270bba596c7be84b7abe06faf7f8c1" exitCode=0 Dec 12 00:26:00 crc kubenswrapper[4917]: I1212 00:26:00.042200 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4002e99e-6305-4211-8e86-525a001ca6b9","Type":"ContainerDied","Data":"fa8f8dd46edb42e735d63e15cab840393b270bba596c7be84b7abe06faf7f8c1"} Dec 12 00:26:01 crc kubenswrapper[4917]: I1212 00:26:01.053466 4917 generic.go:334] "Generic (PLEG): container finished" podID="4002e99e-6305-4211-8e86-525a001ca6b9" containerID="258d95274f4c507ff82d2bf15e2603a1dc12963939110864c2eaf86560202464" exitCode=0 Dec 12 00:26:01 crc kubenswrapper[4917]: I1212 00:26:01.053965 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4002e99e-6305-4211-8e86-525a001ca6b9","Type":"ContainerDied","Data":"258d95274f4c507ff82d2bf15e2603a1dc12963939110864c2eaf86560202464"} Dec 12 00:26:01 crc kubenswrapper[4917]: I1212 00:26:01.110272 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_4002e99e-6305-4211-8e86-525a001ca6b9/manage-dockerfile/0.log" Dec 12 00:26:02 crc kubenswrapper[4917]: I1212 00:26:02.063348 4917 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4002e99e-6305-4211-8e86-525a001ca6b9","Type":"ContainerStarted","Data":"6c0207fb48ffa9a0de319cd8f28e5a4bfaf9a43df7aa7bd8b16f3152e3fa1e7b"} Dec 12 00:26:02 crc kubenswrapper[4917]: I1212 00:26:02.100334 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.100310687 podStartE2EDuration="5.100310687s" podCreationTimestamp="2025-12-12 00:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:26:02.099570728 +0000 UTC m=+1196.877371561" watchObservedRunningTime="2025-12-12 00:26:02.100310687 +0000 UTC m=+1196.878111500" Dec 12 00:26:29 crc kubenswrapper[4917]: I1212 00:26:29.639793 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:26:29 crc kubenswrapper[4917]: I1212 00:26:29.640485 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:26:29 crc kubenswrapper[4917]: I1212 00:26:29.640545 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:26:29 crc kubenswrapper[4917]: I1212 00:26:29.641360 4917 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"dbf52aec17a34453ffa3035aa47c1f1caf454e50712594c9e392bfa350bed490"} pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 00:26:29 crc kubenswrapper[4917]: I1212 00:26:29.641431 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" containerID="cri-o://dbf52aec17a34453ffa3035aa47c1f1caf454e50712594c9e392bfa350bed490" gracePeriod=600 Dec 12 00:26:30 crc kubenswrapper[4917]: I1212 00:26:30.251111 4917 generic.go:334] "Generic (PLEG): container finished" podID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerID="dbf52aec17a34453ffa3035aa47c1f1caf454e50712594c9e392bfa350bed490" exitCode=0 Dec 12 00:26:30 crc kubenswrapper[4917]: I1212 00:26:30.251194 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerDied","Data":"dbf52aec17a34453ffa3035aa47c1f1caf454e50712594c9e392bfa350bed490"} Dec 12 00:26:30 crc kubenswrapper[4917]: I1212 00:26:30.251486 4917 scope.go:117] "RemoveContainer" containerID="810e6b0f2d007409666f46f2e8eac0cefab026671305efff02967dc13a6c6eec" Dec 12 00:26:31 crc kubenswrapper[4917]: I1212 00:26:31.259442 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"c8ec3b647c80a5906eabbf0fa7c865967e32995db04bf0586010c232bc53706c"} Dec 12 00:27:39 crc kubenswrapper[4917]: I1212 00:27:39.051562 4917 trace.go:236] Trace[1247217753]: "Calculate volume metrics of container-storage-root for pod service-telemetry/smart-gateway-operator-2-build" (12-Dec-2025 
00:27:37.906) (total time: 1144ms): Dec 12 00:27:39 crc kubenswrapper[4917]: Trace[1247217753]: [1.144718135s] [1.144718135s] END Dec 12 00:27:47 crc kubenswrapper[4917]: I1212 00:27:47.138531 4917 generic.go:334] "Generic (PLEG): container finished" podID="4002e99e-6305-4211-8e86-525a001ca6b9" containerID="6c0207fb48ffa9a0de319cd8f28e5a4bfaf9a43df7aa7bd8b16f3152e3fa1e7b" exitCode=0 Dec 12 00:27:47 crc kubenswrapper[4917]: I1212 00:27:47.138620 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4002e99e-6305-4211-8e86-525a001ca6b9","Type":"ContainerDied","Data":"6c0207fb48ffa9a0de319cd8f28e5a4bfaf9a43df7aa7bd8b16f3152e3fa1e7b"} Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.411318 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.422797 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-node-pullsecrets\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.422905 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-system-configs\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.422959 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-buildworkdir\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 
crc kubenswrapper[4917]: I1212 00:27:48.422979 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.423080 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.422998 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-buildcachedir\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.423272 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-push\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.423929 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgzc4\" (UniqueName: \"kubernetes.io/projected/4002e99e-6305-4211-8e86-525a001ca6b9-kube-api-access-vgzc4\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc 
kubenswrapper[4917]: I1212 00:27:48.424020 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-proxy-ca-bundles\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.424087 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-ca-bundles\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.424122 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-root\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.424151 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-run\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.424170 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-build-blob-cache\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.424268 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: 
\"kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-pull\") pod \"4002e99e-6305-4211-8e86-525a001ca6b9\" (UID: \"4002e99e-6305-4211-8e86-525a001ca6b9\") " Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.424292 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.424886 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.424910 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.424923 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4002e99e-6305-4211-8e86-525a001ca6b9-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.425484 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.426570 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.426805 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.431316 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.431504 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.431590 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4002e99e-6305-4211-8e86-525a001ca6b9-kube-api-access-vgzc4" (OuterVolumeSpecName: "kube-api-access-vgzc4") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "kube-api-access-vgzc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.431700 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.527214 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.527783 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.527796 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/4002e99e-6305-4211-8e86-525a001ca6b9-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.527806 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgzc4\" 
(UniqueName: \"kubernetes.io/projected/4002e99e-6305-4211-8e86-525a001ca6b9-kube-api-access-vgzc4\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.527817 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.527827 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4002e99e-6305-4211-8e86-525a001ca6b9-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.527835 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.639356 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:27:48 crc kubenswrapper[4917]: I1212 00:27:48.731401 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:49 crc kubenswrapper[4917]: I1212 00:27:49.162355 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4002e99e-6305-4211-8e86-525a001ca6b9","Type":"ContainerDied","Data":"0275db67dd2dec7c4274927d37be576b6605e624ccd842dc27cff8fce74cf9b9"} Dec 12 00:27:49 crc kubenswrapper[4917]: I1212 00:27:49.162433 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0275db67dd2dec7c4274927d37be576b6605e624ccd842dc27cff8fce74cf9b9" Dec 12 00:27:49 crc kubenswrapper[4917]: I1212 00:27:49.162565 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 12 00:27:50 crc kubenswrapper[4917]: I1212 00:27:50.940518 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4002e99e-6305-4211-8e86-525a001ca6b9" (UID: "4002e99e-6305-4211-8e86-525a001ca6b9"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:27:50 crc kubenswrapper[4917]: I1212 00:27:50.976686 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4002e99e-6305-4211-8e86-525a001ca6b9-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.646507 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 12 00:27:53 crc kubenswrapper[4917]: E1212 00:27:53.647698 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4002e99e-6305-4211-8e86-525a001ca6b9" containerName="git-clone" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.647721 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="4002e99e-6305-4211-8e86-525a001ca6b9" containerName="git-clone" Dec 12 00:27:53 crc kubenswrapper[4917]: E1212 00:27:53.647749 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4002e99e-6305-4211-8e86-525a001ca6b9" containerName="docker-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.647756 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="4002e99e-6305-4211-8e86-525a001ca6b9" containerName="docker-build" Dec 12 00:27:53 crc kubenswrapper[4917]: E1212 00:27:53.647772 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4002e99e-6305-4211-8e86-525a001ca6b9" containerName="manage-dockerfile" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.647783 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="4002e99e-6305-4211-8e86-525a001ca6b9" containerName="manage-dockerfile" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.647969 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="4002e99e-6305-4211-8e86-525a001ca6b9" containerName="docker-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.648915 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.651272 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.651279 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.651765 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.652352 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.676533 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.731396 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.731485 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-push\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.731521 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-system-configs\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.731553 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-buildworkdir\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.731583 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-buildcachedir\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.731603 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-root\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.731760 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-pull\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.731996 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmfgt\" (UniqueName: 
\"kubernetes.io/projected/1111984c-cedd-4166-9b47-ebd65d901786-kube-api-access-mmfgt\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.732045 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-run\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.732093 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.732197 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.732253 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.833966 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-root\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834040 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-buildcachedir\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834072 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-pull\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834129 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmfgt\" (UniqueName: \"kubernetes.io/projected/1111984c-cedd-4166-9b47-ebd65d901786-kube-api-access-mmfgt\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834157 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-run\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834183 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834215 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834246 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834289 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834326 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-push\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834355 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-system-configs\") pod 
\"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834381 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-buildworkdir\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834886 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-buildworkdir\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834920 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.834942 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-root\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.835492 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-run\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: 
I1212 00:27:53.835586 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-buildcachedir\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.837565 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-system-configs\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.837975 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:53 crc kubenswrapper[4917]: I1212 00:27:53.839930 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:54 crc kubenswrapper[4917]: I1212 00:27:54.004932 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:54 crc kubenswrapper[4917]: I1212 00:27:54.005648 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: 
\"kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-pull\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:54 crc kubenswrapper[4917]: I1212 00:27:54.005834 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-push\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:54 crc kubenswrapper[4917]: I1212 00:27:54.008477 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmfgt\" (UniqueName: \"kubernetes.io/projected/1111984c-cedd-4166-9b47-ebd65d901786-kube-api-access-mmfgt\") pod \"sg-core-1-build\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " pod="service-telemetry/sg-core-1-build" Dec 12 00:27:54 crc kubenswrapper[4917]: I1212 00:27:54.268488 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 12 00:27:54 crc kubenswrapper[4917]: I1212 00:27:54.662636 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 12 00:27:55 crc kubenswrapper[4917]: I1212 00:27:55.214192 4917 generic.go:334] "Generic (PLEG): container finished" podID="1111984c-cedd-4166-9b47-ebd65d901786" containerID="b1b16fa767c4994cc56d83fcd160bad27d2df72a459e6804b11a065fa3501061" exitCode=0 Dec 12 00:27:55 crc kubenswrapper[4917]: I1212 00:27:55.214264 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1111984c-cedd-4166-9b47-ebd65d901786","Type":"ContainerDied","Data":"b1b16fa767c4994cc56d83fcd160bad27d2df72a459e6804b11a065fa3501061"} Dec 12 00:27:55 crc kubenswrapper[4917]: I1212 00:27:55.214321 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1111984c-cedd-4166-9b47-ebd65d901786","Type":"ContainerStarted","Data":"1ea7a021e8648a22b8155a018cecf29db3a0fcdacf4bfcd4f612c452dbe121a8"} Dec 12 00:27:56 crc kubenswrapper[4917]: I1212 00:27:56.223815 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1111984c-cedd-4166-9b47-ebd65d901786","Type":"ContainerStarted","Data":"59f2d40a89b007efbaa5c9e1543a1beb8c828072577389de1eb98b15e572960e"} Dec 12 00:27:56 crc kubenswrapper[4917]: I1212 00:27:56.255208 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.255175996 podStartE2EDuration="3.255175996s" podCreationTimestamp="2025-12-12 00:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:27:56.249243142 +0000 UTC m=+1311.027043965" watchObservedRunningTime="2025-12-12 00:27:56.255175996 +0000 UTC m=+1311.032976809" Dec 12 00:28:04 crc 
kubenswrapper[4917]: I1212 00:28:04.029906 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 12 00:28:04 crc kubenswrapper[4917]: I1212 00:28:04.031400 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="1111984c-cedd-4166-9b47-ebd65d901786" containerName="docker-build" containerID="cri-o://59f2d40a89b007efbaa5c9e1543a1beb8c828072577389de1eb98b15e572960e" gracePeriod=30 Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.295561 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.298188 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.301891 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.301909 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.302247 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.313953 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.353778 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-run\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.354274 4917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqvfv\" (UniqueName: \"kubernetes.io/projected/641f61fa-3761-480d-9d1b-f5a10f5b62f4-kube-api-access-zqvfv\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.354376 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-push\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.354534 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-pull\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.354754 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-system-configs\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.354886 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc 
kubenswrapper[4917]: I1212 00:28:06.355025 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildworkdir\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.355175 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildcachedir\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.355317 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.355453 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-root\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.355743 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 
00:28:06.355869 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457153 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457231 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildworkdir\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457263 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildcachedir\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457295 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457343 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-root\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457386 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457421 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457465 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-run\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457588 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqvfv\" (UniqueName: \"kubernetes.io/projected/641f61fa-3761-480d-9d1b-f5a10f5b62f4-kube-api-access-zqvfv\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457612 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: 
\"kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-push\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457724 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-pull\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.457749 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-system-configs\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.458212 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildcachedir\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.458375 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-system-configs\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.458505 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-node-pullsecrets\") pod \"sg-core-2-build\" 
(UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.458731 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildworkdir\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.458801 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-run\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.458931 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.458978 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-root\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.459118 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 
00:28:06.459059 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.473496 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-push\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.475869 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqvfv\" (UniqueName: \"kubernetes.io/projected/641f61fa-3761-480d-9d1b-f5a10f5b62f4-kube-api-access-zqvfv\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.477103 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-pull\") pod \"sg-core-2-build\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.619609 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 12 00:28:06 crc kubenswrapper[4917]: I1212 00:28:06.904751 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 12 00:28:07 crc kubenswrapper[4917]: I1212 00:28:07.320272 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"641f61fa-3761-480d-9d1b-f5a10f5b62f4","Type":"ContainerStarted","Data":"dfd720f16ea31b39d4379e5a7d49ab1557cc25254ad9538c7b78cab91b7447e0"} Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.723408 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_1111984c-cedd-4166-9b47-ebd65d901786/docker-build/0.log" Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.724816 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.845023 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-buildworkdir\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.845097 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-node-pullsecrets\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.845126 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-build-blob-cache\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: 
\"1111984c-cedd-4166-9b47-ebd65d901786\") " Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.845156 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-buildcachedir\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.845183 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-proxy-ca-bundles\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.845214 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-push\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.845249 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-root\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") " Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.845271 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.845357 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.846367 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.845311 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-system-configs\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") "
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.846631 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.846632 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-pull\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") "
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.846758 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-ca-bundles\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") "
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.846804 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-run\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") "
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.846989 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmfgt\" (UniqueName: \"kubernetes.io/projected/1111984c-cedd-4166-9b47-ebd65d901786-kube-api-access-mmfgt\") pod \"1111984c-cedd-4166-9b47-ebd65d901786\" (UID: \"1111984c-cedd-4166-9b47-ebd65d901786\") "
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.847225 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.847455 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.847691 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.847707 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.847718 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.847728 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.847743 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1111984c-cedd-4166-9b47-ebd65d901786-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.847756 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1111984c-cedd-4166-9b47-ebd65d901786-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.848065 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.853186 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.853566 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.854068 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1111984c-cedd-4166-9b47-ebd65d901786-kube-api-access-mmfgt" (OuterVolumeSpecName: "kube-api-access-mmfgt") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "kube-api-access-mmfgt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.937558 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.949296 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.949337 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.949347 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmfgt\" (UniqueName: \"kubernetes.io/projected/1111984c-cedd-4166-9b47-ebd65d901786-kube-api-access-mmfgt\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.949358 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.949367 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/1111984c-cedd-4166-9b47-ebd65d901786-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:11 crc kubenswrapper[4917]: I1212 00:28:11.994952 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1111984c-cedd-4166-9b47-ebd65d901786" (UID: "1111984c-cedd-4166-9b47-ebd65d901786"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 00:28:12 crc kubenswrapper[4917]: I1212 00:28:12.051319 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1111984c-cedd-4166-9b47-ebd65d901786-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 12 00:28:12 crc kubenswrapper[4917]: I1212 00:28:12.068026 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_1111984c-cedd-4166-9b47-ebd65d901786/docker-build/0.log"
Dec 12 00:28:12 crc kubenswrapper[4917]: I1212 00:28:12.069083 4917 generic.go:334] "Generic (PLEG): container finished" podID="1111984c-cedd-4166-9b47-ebd65d901786" containerID="59f2d40a89b007efbaa5c9e1543a1beb8c828072577389de1eb98b15e572960e" exitCode=1
Dec 12 00:28:12 crc kubenswrapper[4917]: I1212 00:28:12.069134 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1111984c-cedd-4166-9b47-ebd65d901786","Type":"ContainerDied","Data":"59f2d40a89b007efbaa5c9e1543a1beb8c828072577389de1eb98b15e572960e"}
Dec 12 00:28:12 crc kubenswrapper[4917]: I1212 00:28:12.069236 4917 scope.go:117] "RemoveContainer" containerID="59f2d40a89b007efbaa5c9e1543a1beb8c828072577389de1eb98b15e572960e"
Dec 12 00:28:12 crc kubenswrapper[4917]: I1212 00:28:12.106416 4917 scope.go:117] "RemoveContainer" containerID="b1b16fa767c4994cc56d83fcd160bad27d2df72a459e6804b11a065fa3501061"
Dec 12 00:28:13 crc kubenswrapper[4917]: I1212 00:28:13.078801 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"641f61fa-3761-480d-9d1b-f5a10f5b62f4","Type":"ContainerStarted","Data":"d15fe5bebb16ad49d6e09342cedbe96fe1ef62247a5aa9a942b0e337ed46590a"}
Dec 12 00:28:13 crc kubenswrapper[4917]: I1212 00:28:13.080285 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1111984c-cedd-4166-9b47-ebd65d901786","Type":"ContainerDied","Data":"1ea7a021e8648a22b8155a018cecf29db3a0fcdacf4bfcd4f612c452dbe121a8"}
Dec 12 00:28:13 crc kubenswrapper[4917]: I1212 00:28:13.080331 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Dec 12 00:28:13 crc kubenswrapper[4917]: I1212 00:28:13.126621 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"]
Dec 12 00:28:13 crc kubenswrapper[4917]: I1212 00:28:13.132860 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"]
Dec 12 00:28:13 crc kubenswrapper[4917]: I1212 00:28:13.617600 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1111984c-cedd-4166-9b47-ebd65d901786" path="/var/lib/kubelet/pods/1111984c-cedd-4166-9b47-ebd65d901786/volumes"
Dec 12 00:28:14 crc kubenswrapper[4917]: I1212 00:28:14.089702 4917 generic.go:334] "Generic (PLEG): container finished" podID="641f61fa-3761-480d-9d1b-f5a10f5b62f4" containerID="d15fe5bebb16ad49d6e09342cedbe96fe1ef62247a5aa9a942b0e337ed46590a" exitCode=0
Dec 12 00:28:14 crc kubenswrapper[4917]: I1212 00:28:14.089790 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"641f61fa-3761-480d-9d1b-f5a10f5b62f4","Type":"ContainerDied","Data":"d15fe5bebb16ad49d6e09342cedbe96fe1ef62247a5aa9a942b0e337ed46590a"}
Dec 12 00:28:15 crc kubenswrapper[4917]: I1212 00:28:15.107445 4917 generic.go:334] "Generic (PLEG): container finished" podID="641f61fa-3761-480d-9d1b-f5a10f5b62f4" containerID="7fcb3ba66cfb969dfc17d9c2ff52a432a982a30d3b2f8ebddbd23c28c2cbe23d" exitCode=0
Dec 12 00:28:15 crc kubenswrapper[4917]: I1212 00:28:15.107813 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"641f61fa-3761-480d-9d1b-f5a10f5b62f4","Type":"ContainerDied","Data":"7fcb3ba66cfb969dfc17d9c2ff52a432a982a30d3b2f8ebddbd23c28c2cbe23d"}
Dec 12 00:28:15 crc kubenswrapper[4917]: I1212 00:28:15.155401 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_641f61fa-3761-480d-9d1b-f5a10f5b62f4/manage-dockerfile/0.log"
Dec 12 00:28:16 crc kubenswrapper[4917]: I1212 00:28:16.117757 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"641f61fa-3761-480d-9d1b-f5a10f5b62f4","Type":"ContainerStarted","Data":"239f1b203fecdd562d6f0cf5d7604343a6a42097b11e9593737ee753112a7b33"}
Dec 12 00:28:16 crc kubenswrapper[4917]: I1212 00:28:16.148502 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=10.148471428 podStartE2EDuration="10.148471428s" podCreationTimestamp="2025-12-12 00:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:28:16.145001788 +0000 UTC m=+1330.922802621" watchObservedRunningTime="2025-12-12 00:28:16.148471428 +0000 UTC m=+1330.926272251"
Dec 12 00:28:59 crc kubenswrapper[4917]: I1212 00:28:59.640151 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 00:28:59 crc kubenswrapper[4917]: I1212 00:28:59.641300 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 00:29:29 crc kubenswrapper[4917]: I1212 00:29:29.662513 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 00:29:29 crc kubenswrapper[4917]: I1212 00:29:29.667134 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 00:29:59 crc kubenswrapper[4917]: I1212 00:29:59.826033 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 00:29:59 crc kubenswrapper[4917]: I1212 00:29:59.827327 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 00:29:59 crc kubenswrapper[4917]: I1212 00:29:59.837041 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt"
Dec 12 00:29:59 crc kubenswrapper[4917]: I1212 00:29:59.837863 4917 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8ec3b647c80a5906eabbf0fa7c865967e32995db04bf0586010c232bc53706c"} pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 00:29:59 crc kubenswrapper[4917]: I1212 00:29:59.837941 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" containerID="cri-o://c8ec3b647c80a5906eabbf0fa7c865967e32995db04bf0586010c232bc53706c" gracePeriod=600
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.158728 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"]
Dec 12 00:30:00 crc kubenswrapper[4917]: E1212 00:30:00.160126 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1111984c-cedd-4166-9b47-ebd65d901786" containerName="docker-build"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.160156 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1111984c-cedd-4166-9b47-ebd65d901786" containerName="docker-build"
Dec 12 00:30:00 crc kubenswrapper[4917]: E1212 00:30:00.160173 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1111984c-cedd-4166-9b47-ebd65d901786" containerName="manage-dockerfile"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.160184 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1111984c-cedd-4166-9b47-ebd65d901786" containerName="manage-dockerfile"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.160327 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="1111984c-cedd-4166-9b47-ebd65d901786" containerName="docker-build"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.160988 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.164277 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.165547 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.179000 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"]
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.244742 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b360dbf0-fcd9-4347-9009-207b5168daeb-secret-volume\") pod \"collect-profiles-29424990-ql8lj\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.244876 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b360dbf0-fcd9-4347-9009-207b5168daeb-config-volume\") pod \"collect-profiles-29424990-ql8lj\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.244981 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd77b\" (UniqueName: \"kubernetes.io/projected/b360dbf0-fcd9-4347-9009-207b5168daeb-kube-api-access-sd77b\") pod \"collect-profiles-29424990-ql8lj\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.346347 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd77b\" (UniqueName: \"kubernetes.io/projected/b360dbf0-fcd9-4347-9009-207b5168daeb-kube-api-access-sd77b\") pod \"collect-profiles-29424990-ql8lj\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.346456 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b360dbf0-fcd9-4347-9009-207b5168daeb-secret-volume\") pod \"collect-profiles-29424990-ql8lj\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.346524 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b360dbf0-fcd9-4347-9009-207b5168daeb-config-volume\") pod \"collect-profiles-29424990-ql8lj\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.347982 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b360dbf0-fcd9-4347-9009-207b5168daeb-config-volume\") pod \"collect-profiles-29424990-ql8lj\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.361402 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b360dbf0-fcd9-4347-9009-207b5168daeb-secret-volume\") pod \"collect-profiles-29424990-ql8lj\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.377801 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd77b\" (UniqueName: \"kubernetes.io/projected/b360dbf0-fcd9-4347-9009-207b5168daeb-kube-api-access-sd77b\") pod \"collect-profiles-29424990-ql8lj\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.484534 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.804884 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"]
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.974174 4917 generic.go:334] "Generic (PLEG): container finished" podID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerID="c8ec3b647c80a5906eabbf0fa7c865967e32995db04bf0586010c232bc53706c" exitCode=0
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.974232 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerDied","Data":"c8ec3b647c80a5906eabbf0fa7c865967e32995db04bf0586010c232bc53706c"}
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.974299 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542"}
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.974324 4917 scope.go:117] "RemoveContainer" containerID="dbf52aec17a34453ffa3035aa47c1f1caf454e50712594c9e392bfa350bed490"
Dec 12 00:30:00 crc kubenswrapper[4917]: I1212 00:30:00.976169 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj" event={"ID":"b360dbf0-fcd9-4347-9009-207b5168daeb","Type":"ContainerStarted","Data":"ca912c3ab100d722f0967573f412622d601340e187253dea2fc82024ff483131"}
Dec 12 00:30:01 crc kubenswrapper[4917]: I1212 00:30:01.986044 4917 generic.go:334] "Generic (PLEG): container finished" podID="b360dbf0-fcd9-4347-9009-207b5168daeb" containerID="84c6c9c5f99355b6c74d69fbdf2cbdbf4023e6a3e7b8cedfdea5520fff9d0e1e" exitCode=0
Dec 12 00:30:01 crc kubenswrapper[4917]: I1212 00:30:01.986195 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj" event={"ID":"b360dbf0-fcd9-4347-9009-207b5168daeb","Type":"ContainerDied","Data":"84c6c9c5f99355b6c74d69fbdf2cbdbf4023e6a3e7b8cedfdea5520fff9d0e1e"}
Dec 12 00:30:03 crc kubenswrapper[4917]: I1212 00:30:03.266134 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:03 crc kubenswrapper[4917]: I1212 00:30:03.292921 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b360dbf0-fcd9-4347-9009-207b5168daeb-secret-volume\") pod \"b360dbf0-fcd9-4347-9009-207b5168daeb\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") "
Dec 12 00:30:03 crc kubenswrapper[4917]: I1212 00:30:03.293033 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b360dbf0-fcd9-4347-9009-207b5168daeb-config-volume\") pod \"b360dbf0-fcd9-4347-9009-207b5168daeb\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") "
Dec 12 00:30:03 crc kubenswrapper[4917]: I1212 00:30:03.293097 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd77b\" (UniqueName: \"kubernetes.io/projected/b360dbf0-fcd9-4347-9009-207b5168daeb-kube-api-access-sd77b\") pod \"b360dbf0-fcd9-4347-9009-207b5168daeb\" (UID: \"b360dbf0-fcd9-4347-9009-207b5168daeb\") "
Dec 12 00:30:03 crc kubenswrapper[4917]: I1212 00:30:03.294373 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b360dbf0-fcd9-4347-9009-207b5168daeb-config-volume" (OuterVolumeSpecName: "config-volume") pod "b360dbf0-fcd9-4347-9009-207b5168daeb" (UID: "b360dbf0-fcd9-4347-9009-207b5168daeb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 00:30:03 crc kubenswrapper[4917]: I1212 00:30:03.301024 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b360dbf0-fcd9-4347-9009-207b5168daeb-kube-api-access-sd77b" (OuterVolumeSpecName: "kube-api-access-sd77b") pod "b360dbf0-fcd9-4347-9009-207b5168daeb" (UID: "b360dbf0-fcd9-4347-9009-207b5168daeb"). InnerVolumeSpecName "kube-api-access-sd77b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 00:30:03 crc kubenswrapper[4917]: I1212 00:30:03.301153 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b360dbf0-fcd9-4347-9009-207b5168daeb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b360dbf0-fcd9-4347-9009-207b5168daeb" (UID: "b360dbf0-fcd9-4347-9009-207b5168daeb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 00:30:03 crc kubenswrapper[4917]: I1212 00:30:03.395415 4917 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b360dbf0-fcd9-4347-9009-207b5168daeb-config-volume\") on node \"crc\" DevicePath \"\""
Dec 12 00:30:03 crc kubenswrapper[4917]: I1212 00:30:03.395898 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd77b\" (UniqueName: \"kubernetes.io/projected/b360dbf0-fcd9-4347-9009-207b5168daeb-kube-api-access-sd77b\") on node \"crc\" DevicePath \"\""
Dec 12 00:30:03 crc kubenswrapper[4917]: I1212 00:30:03.395919 4917 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b360dbf0-fcd9-4347-9009-207b5168daeb-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 12 00:30:04 crc kubenswrapper[4917]: I1212 00:30:04.006626 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj" event={"ID":"b360dbf0-fcd9-4347-9009-207b5168daeb","Type":"ContainerDied","Data":"ca912c3ab100d722f0967573f412622d601340e187253dea2fc82024ff483131"}
Dec 12 00:30:04 crc kubenswrapper[4917]: I1212 00:30:04.006703 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca912c3ab100d722f0967573f412622d601340e187253dea2fc82024ff483131"
Dec 12 00:30:04 crc kubenswrapper[4917]: I1212 00:30:04.006746 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424990-ql8lj"
Dec 12 00:30:14 crc kubenswrapper[4917]: I1212 00:30:14.427243 4917 trace.go:236] Trace[1989801507]: "Calculate volume metrics of container-storage-root for pod service-telemetry/sg-core-2-build" (12-Dec-2025 00:30:12.969) (total time: 1457ms):
Dec 12 00:30:14 crc kubenswrapper[4917]: Trace[1989801507]: [1.457666638s] [1.457666638s] END
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.041862 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b8qb7"]
Dec 12 00:30:24 crc kubenswrapper[4917]: E1212 00:30:24.043316 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b360dbf0-fcd9-4347-9009-207b5168daeb" containerName="collect-profiles"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.043341 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="b360dbf0-fcd9-4347-9009-207b5168daeb" containerName="collect-profiles"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.043540 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="b360dbf0-fcd9-4347-9009-207b5168daeb" containerName="collect-profiles"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.044924 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.059684 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8qb7"]
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.201555 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-utilities\") pod \"redhat-operators-b8qb7\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.201627 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj4cd\" (UniqueName: \"kubernetes.io/projected/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-kube-api-access-sj4cd\") pod \"redhat-operators-b8qb7\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.201682 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-catalog-content\") pod \"redhat-operators-b8qb7\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.303226 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj4cd\" (UniqueName: \"kubernetes.io/projected/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-kube-api-access-sj4cd\") pod \"redhat-operators-b8qb7\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.303325 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-catalog-content\") pod \"redhat-operators-b8qb7\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.303433 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-utilities\") pod \"redhat-operators-b8qb7\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.304089 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-utilities\") pod \"redhat-operators-b8qb7\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.304091 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-catalog-content\") pod \"redhat-operators-b8qb7\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.329067 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj4cd\" (UniqueName: \"kubernetes.io/projected/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-kube-api-access-sj4cd\") pod \"redhat-operators-b8qb7\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.366132 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8qb7"
Dec 12 00:30:24 crc kubenswrapper[4917]: I1212 00:30:24.611584 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8qb7"]
Dec 12 00:30:25 crc kubenswrapper[4917]: I1212 00:30:25.147751 4917 generic.go:334] "Generic (PLEG): container finished" podID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerID="4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550" exitCode=0
Dec 12 00:30:25 crc kubenswrapper[4917]: I1212 00:30:25.147800 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb7" event={"ID":"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c","Type":"ContainerDied","Data":"4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550"}
Dec 12 00:30:25 crc kubenswrapper[4917]: I1212 00:30:25.147833 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb7" event={"ID":"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c","Type":"ContainerStarted","Data":"d5998e3dcf01caa767cb315fb0a1c41d8c73c22256153d97351e88c3b52c74f0"}
Dec 12 00:30:25 crc kubenswrapper[4917]: I1212 00:30:25.150455 4917 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 00:30:27 crc kubenswrapper[4917]: I1212 00:30:27.167573 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb7" event={"ID":"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c","Type":"ContainerStarted","Data":"1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa"}
Dec 12 00:30:28 crc kubenswrapper[4917]: I1212 00:30:28.175590 4917 generic.go:334] "Generic (PLEG): container finished" podID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerID="1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa" exitCode=0
Dec 12 00:30:28 crc kubenswrapper[4917]: I1212 00:30:28.175687 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-b8qb7" event={"ID":"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c","Type":"ContainerDied","Data":"1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa"} Dec 12 00:30:29 crc kubenswrapper[4917]: I1212 00:30:29.224971 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb7" event={"ID":"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c","Type":"ContainerStarted","Data":"2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53"} Dec 12 00:30:29 crc kubenswrapper[4917]: I1212 00:30:29.248575 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b8qb7" podStartSLOduration=1.48225097 podStartE2EDuration="5.248551029s" podCreationTimestamp="2025-12-12 00:30:24 +0000 UTC" firstStartedPulling="2025-12-12 00:30:25.15012701 +0000 UTC m=+1459.927927823" lastFinishedPulling="2025-12-12 00:30:28.916427069 +0000 UTC m=+1463.694227882" observedRunningTime="2025-12-12 00:30:29.244862121 +0000 UTC m=+1464.022662944" watchObservedRunningTime="2025-12-12 00:30:29.248551029 +0000 UTC m=+1464.026351862" Dec 12 00:30:34 crc kubenswrapper[4917]: I1212 00:30:34.367099 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b8qb7" Dec 12 00:30:34 crc kubenswrapper[4917]: I1212 00:30:34.367713 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b8qb7" Dec 12 00:30:35 crc kubenswrapper[4917]: I1212 00:30:35.408704 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b8qb7" podUID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerName="registry-server" probeResult="failure" output=< Dec 12 00:30:35 crc kubenswrapper[4917]: timeout: failed to connect service ":50051" within 1s Dec 12 00:30:35 crc kubenswrapper[4917]: > Dec 12 00:30:44 crc kubenswrapper[4917]: I1212 
00:30:44.412505 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b8qb7" Dec 12 00:30:44 crc kubenswrapper[4917]: I1212 00:30:44.470860 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b8qb7" Dec 12 00:30:44 crc kubenswrapper[4917]: I1212 00:30:44.653791 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8qb7"] Dec 12 00:30:46 crc kubenswrapper[4917]: I1212 00:30:46.380251 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b8qb7" podUID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerName="registry-server" containerID="cri-o://2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53" gracePeriod=2 Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.162139 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8qb7" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.201590 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-utilities\") pod \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.201724 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj4cd\" (UniqueName: \"kubernetes.io/projected/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-kube-api-access-sj4cd\") pod \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.201764 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-catalog-content\") pod \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\" (UID: \"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c\") " Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.202902 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-utilities" (OuterVolumeSpecName: "utilities") pod "01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" (UID: "01c1eb28-cc93-4e9f-b8b4-b3125bafc15c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.207834 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-kube-api-access-sj4cd" (OuterVolumeSpecName: "kube-api-access-sj4cd") pod "01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" (UID: "01c1eb28-cc93-4e9f-b8b4-b3125bafc15c"). InnerVolumeSpecName "kube-api-access-sj4cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.303412 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.303459 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj4cd\" (UniqueName: \"kubernetes.io/projected/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-kube-api-access-sj4cd\") on node \"crc\" DevicePath \"\"" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.317841 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" (UID: "01c1eb28-cc93-4e9f-b8b4-b3125bafc15c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.396828 4917 generic.go:334] "Generic (PLEG): container finished" podID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerID="2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53" exitCode=0 Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.396890 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb7" event={"ID":"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c","Type":"ContainerDied","Data":"2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53"} Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.396938 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8qb7" event={"ID":"01c1eb28-cc93-4e9f-b8b4-b3125bafc15c","Type":"ContainerDied","Data":"d5998e3dcf01caa767cb315fb0a1c41d8c73c22256153d97351e88c3b52c74f0"} Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.396937 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b8qb7" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.396962 4917 scope.go:117] "RemoveContainer" containerID="2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.404920 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.417549 4917 scope.go:117] "RemoveContainer" containerID="1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.442489 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8qb7"] Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.448471 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b8qb7"] Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.462622 4917 scope.go:117] "RemoveContainer" containerID="4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.483464 4917 scope.go:117] "RemoveContainer" containerID="2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53" Dec 12 00:30:48 crc kubenswrapper[4917]: E1212 00:30:48.484075 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53\": container with ID starting with 2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53 not found: ID does not exist" containerID="2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.484164 4917 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53"} err="failed to get container status \"2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53\": rpc error: code = NotFound desc = could not find container \"2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53\": container with ID starting with 2883544fd22d440224f34576c79183cfcc871635328324e2b84926f20b0d9f53 not found: ID does not exist" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.484214 4917 scope.go:117] "RemoveContainer" containerID="1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa" Dec 12 00:30:48 crc kubenswrapper[4917]: E1212 00:30:48.484698 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa\": container with ID starting with 1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa not found: ID does not exist" containerID="1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.484752 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa"} err="failed to get container status \"1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa\": rpc error: code = NotFound desc = could not find container \"1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa\": container with ID starting with 1615a86b1418b099f2cc4b15ce6a8e4eaebe1090bb058b70f351bb83a9ac16fa not found: ID does not exist" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.484784 4917 scope.go:117] "RemoveContainer" containerID="4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550" Dec 12 00:30:48 crc kubenswrapper[4917]: E1212 00:30:48.486904 4917 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550\": container with ID starting with 4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550 not found: ID does not exist" containerID="4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550" Dec 12 00:30:48 crc kubenswrapper[4917]: I1212 00:30:48.486966 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550"} err="failed to get container status \"4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550\": rpc error: code = NotFound desc = could not find container \"4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550\": container with ID starting with 4744241dbdeb8a7ad325a1cdd4e4748c10453e4e7ea3edf525d46dcbf588c550 not found: ID does not exist" Dec 12 00:30:49 crc kubenswrapper[4917]: I1212 00:30:49.608833 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" path="/var/lib/kubelet/pods/01c1eb28-cc93-4e9f-b8b4-b3125bafc15c/volumes" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.183869 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n7dpx"] Dec 12 00:31:01 crc kubenswrapper[4917]: E1212 00:31:01.185102 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerName="extract-utilities" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.185122 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerName="extract-utilities" Dec 12 00:31:01 crc kubenswrapper[4917]: E1212 00:31:01.185145 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerName="registry-server" Dec 12 00:31:01 crc 
kubenswrapper[4917]: I1212 00:31:01.185154 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerName="registry-server" Dec 12 00:31:01 crc kubenswrapper[4917]: E1212 00:31:01.185168 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerName="extract-content" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.185176 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerName="extract-content" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.185316 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c1eb28-cc93-4e9f-b8b4-b3125bafc15c" containerName="registry-server" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.186331 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.196156 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n7dpx"] Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.296021 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpl58\" (UniqueName: \"kubernetes.io/projected/4ef6fc63-019d-4e87-8c37-23ed24ee0020-kube-api-access-zpl58\") pod \"community-operators-n7dpx\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.296112 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-catalog-content\") pod \"community-operators-n7dpx\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 
crc kubenswrapper[4917]: I1212 00:31:01.296143 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-utilities\") pod \"community-operators-n7dpx\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.397728 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-catalog-content\") pod \"community-operators-n7dpx\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.397789 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-utilities\") pod \"community-operators-n7dpx\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.397917 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpl58\" (UniqueName: \"kubernetes.io/projected/4ef6fc63-019d-4e87-8c37-23ed24ee0020-kube-api-access-zpl58\") pod \"community-operators-n7dpx\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.398348 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-catalog-content\") pod \"community-operators-n7dpx\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 crc 
kubenswrapper[4917]: I1212 00:31:01.398442 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-utilities\") pod \"community-operators-n7dpx\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.418263 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpl58\" (UniqueName: \"kubernetes.io/projected/4ef6fc63-019d-4e87-8c37-23ed24ee0020-kube-api-access-zpl58\") pod \"community-operators-n7dpx\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.506090 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:01 crc kubenswrapper[4917]: I1212 00:31:01.775484 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n7dpx"] Dec 12 00:31:02 crc kubenswrapper[4917]: I1212 00:31:02.491739 4917 generic.go:334] "Generic (PLEG): container finished" podID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" containerID="8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3" exitCode=0 Dec 12 00:31:02 crc kubenswrapper[4917]: I1212 00:31:02.491858 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7dpx" event={"ID":"4ef6fc63-019d-4e87-8c37-23ed24ee0020","Type":"ContainerDied","Data":"8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3"} Dec 12 00:31:02 crc kubenswrapper[4917]: I1212 00:31:02.494246 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7dpx" 
event={"ID":"4ef6fc63-019d-4e87-8c37-23ed24ee0020","Type":"ContainerStarted","Data":"4691e560407105ce0eba11644f131673f2f72b11a94dbb586bb9c021e6cd6e3a"} Dec 12 00:31:03 crc kubenswrapper[4917]: I1212 00:31:03.503281 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7dpx" event={"ID":"4ef6fc63-019d-4e87-8c37-23ed24ee0020","Type":"ContainerStarted","Data":"bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d"} Dec 12 00:31:04 crc kubenswrapper[4917]: I1212 00:31:04.511810 4917 generic.go:334] "Generic (PLEG): container finished" podID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" containerID="bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d" exitCode=0 Dec 12 00:31:04 crc kubenswrapper[4917]: I1212 00:31:04.511887 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7dpx" event={"ID":"4ef6fc63-019d-4e87-8c37-23ed24ee0020","Type":"ContainerDied","Data":"bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d"} Dec 12 00:31:05 crc kubenswrapper[4917]: I1212 00:31:05.520922 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7dpx" event={"ID":"4ef6fc63-019d-4e87-8c37-23ed24ee0020","Type":"ContainerStarted","Data":"106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3"} Dec 12 00:31:05 crc kubenswrapper[4917]: I1212 00:31:05.543550 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n7dpx" podStartSLOduration=1.806684085 podStartE2EDuration="4.543512498s" podCreationTimestamp="2025-12-12 00:31:01 +0000 UTC" firstStartedPulling="2025-12-12 00:31:02.493352913 +0000 UTC m=+1497.271153726" lastFinishedPulling="2025-12-12 00:31:05.230181326 +0000 UTC m=+1500.007982139" observedRunningTime="2025-12-12 00:31:05.539885762 +0000 UTC m=+1500.317686565" watchObservedRunningTime="2025-12-12 00:31:05.543512498 +0000 UTC 
m=+1500.321313321" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.558384 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4z494"] Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.560822 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.577151 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4z494"] Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.606838 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44lhl\" (UniqueName: \"kubernetes.io/projected/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-kube-api-access-44lhl\") pod \"certified-operators-4z494\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.606994 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-utilities\") pod \"certified-operators-4z494\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.607077 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-catalog-content\") pod \"certified-operators-4z494\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.709035 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44lhl\" (UniqueName: 
\"kubernetes.io/projected/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-kube-api-access-44lhl\") pod \"certified-operators-4z494\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.709156 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-utilities\") pod \"certified-operators-4z494\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.709184 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-catalog-content\") pod \"certified-operators-4z494\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.709782 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-utilities\") pod \"certified-operators-4z494\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.709805 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-catalog-content\") pod \"certified-operators-4z494\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.734340 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44lhl\" (UniqueName: 
\"kubernetes.io/projected/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-kube-api-access-44lhl\") pod \"certified-operators-4z494\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:08 crc kubenswrapper[4917]: I1212 00:31:08.885565 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:09 crc kubenswrapper[4917]: I1212 00:31:09.447467 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4z494"] Dec 12 00:31:09 crc kubenswrapper[4917]: I1212 00:31:09.546344 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z494" event={"ID":"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017","Type":"ContainerStarted","Data":"11db9f51f82b7544c8c69392aa237343aa50bb7cf9e39319bd36ad10b0f00666"} Dec 12 00:31:10 crc kubenswrapper[4917]: I1212 00:31:10.555791 4917 generic.go:334] "Generic (PLEG): container finished" podID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerID="507dca4cb6bcb3ff89059aebc57997bc7e56e9d6ac402b7277824106032aec94" exitCode=0 Dec 12 00:31:10 crc kubenswrapper[4917]: I1212 00:31:10.555898 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z494" event={"ID":"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017","Type":"ContainerDied","Data":"507dca4cb6bcb3ff89059aebc57997bc7e56e9d6ac402b7277824106032aec94"} Dec 12 00:31:11 crc kubenswrapper[4917]: I1212 00:31:11.506812 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:11 crc kubenswrapper[4917]: I1212 00:31:11.506893 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:11 crc kubenswrapper[4917]: I1212 00:31:11.554555 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:11 crc kubenswrapper[4917]: I1212 00:31:11.608751 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:12 crc kubenswrapper[4917]: I1212 00:31:12.572573 4917 generic.go:334] "Generic (PLEG): container finished" podID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerID="1f558d00f41f11861f3218d0e1e1904429397844ab295cda8d3f0324c8bcf753" exitCode=0 Dec 12 00:31:12 crc kubenswrapper[4917]: I1212 00:31:12.572692 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z494" event={"ID":"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017","Type":"ContainerDied","Data":"1f558d00f41f11861f3218d0e1e1904429397844ab295cda8d3f0324c8bcf753"} Dec 12 00:31:12 crc kubenswrapper[4917]: I1212 00:31:12.751415 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n7dpx"] Dec 12 00:31:13 crc kubenswrapper[4917]: I1212 00:31:13.585490 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z494" event={"ID":"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017","Type":"ContainerStarted","Data":"9d7f163ebb5e41d3ac5004d37f536568e3475e0ebb91081129ea0404dd2abd61"} Dec 12 00:31:13 crc kubenswrapper[4917]: I1212 00:31:13.585829 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n7dpx" podUID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" containerName="registry-server" containerID="cri-o://106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3" gracePeriod=2 Dec 12 00:31:13 crc kubenswrapper[4917]: I1212 00:31:13.619084 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4z494" podStartSLOduration=3.06311718 podStartE2EDuration="5.619060455s" podCreationTimestamp="2025-12-12 
00:31:08 +0000 UTC" firstStartedPulling="2025-12-12 00:31:10.557808016 +0000 UTC m=+1505.335608829" lastFinishedPulling="2025-12-12 00:31:13.113751291 +0000 UTC m=+1507.891552104" observedRunningTime="2025-12-12 00:31:13.614378731 +0000 UTC m=+1508.392179554" watchObservedRunningTime="2025-12-12 00:31:13.619060455 +0000 UTC m=+1508.396861268" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.036055 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.095419 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-catalog-content\") pod \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.102180 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpl58\" (UniqueName: \"kubernetes.io/projected/4ef6fc63-019d-4e87-8c37-23ed24ee0020-kube-api-access-zpl58\") pod \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.102217 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-utilities\") pod \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\" (UID: \"4ef6fc63-019d-4e87-8c37-23ed24ee0020\") " Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.103554 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-utilities" (OuterVolumeSpecName: "utilities") pod "4ef6fc63-019d-4e87-8c37-23ed24ee0020" (UID: "4ef6fc63-019d-4e87-8c37-23ed24ee0020"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.113937 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef6fc63-019d-4e87-8c37-23ed24ee0020-kube-api-access-zpl58" (OuterVolumeSpecName: "kube-api-access-zpl58") pod "4ef6fc63-019d-4e87-8c37-23ed24ee0020" (UID: "4ef6fc63-019d-4e87-8c37-23ed24ee0020"). InnerVolumeSpecName "kube-api-access-zpl58". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.152721 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ef6fc63-019d-4e87-8c37-23ed24ee0020" (UID: "4ef6fc63-019d-4e87-8c37-23ed24ee0020"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.204551 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.204600 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpl58\" (UniqueName: \"kubernetes.io/projected/4ef6fc63-019d-4e87-8c37-23ed24ee0020-kube-api-access-zpl58\") on node \"crc\" DevicePath \"\"" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.204617 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef6fc63-019d-4e87-8c37-23ed24ee0020-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.595188 4917 generic.go:334] "Generic (PLEG): container finished" podID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" 
containerID="106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3" exitCode=0 Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.595250 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7dpx" event={"ID":"4ef6fc63-019d-4e87-8c37-23ed24ee0020","Type":"ContainerDied","Data":"106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3"} Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.595782 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7dpx" event={"ID":"4ef6fc63-019d-4e87-8c37-23ed24ee0020","Type":"ContainerDied","Data":"4691e560407105ce0eba11644f131673f2f72b11a94dbb586bb9c021e6cd6e3a"} Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.595826 4917 scope.go:117] "RemoveContainer" containerID="106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.595296 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n7dpx" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.619087 4917 scope.go:117] "RemoveContainer" containerID="bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.627214 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n7dpx"] Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.646224 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n7dpx"] Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.646522 4917 scope.go:117] "RemoveContainer" containerID="8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.659878 4917 scope.go:117] "RemoveContainer" containerID="106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3" Dec 12 00:31:14 crc kubenswrapper[4917]: E1212 00:31:14.660450 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3\": container with ID starting with 106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3 not found: ID does not exist" containerID="106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.660483 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3"} err="failed to get container status \"106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3\": rpc error: code = NotFound desc = could not find container \"106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3\": container with ID starting with 106ce6c1cc5fc339a4cca122873d1467c3339205ecff860e76a773e0fef19aa3 not 
found: ID does not exist" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.660512 4917 scope.go:117] "RemoveContainer" containerID="bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d" Dec 12 00:31:14 crc kubenswrapper[4917]: E1212 00:31:14.662194 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d\": container with ID starting with bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d not found: ID does not exist" containerID="bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.662228 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d"} err="failed to get container status \"bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d\": rpc error: code = NotFound desc = could not find container \"bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d\": container with ID starting with bc4b19a3defef8f888abcf9317fc26f05a5c88735ad582c88bc5dd7901091b6d not found: ID does not exist" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.662247 4917 scope.go:117] "RemoveContainer" containerID="8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3" Dec 12 00:31:14 crc kubenswrapper[4917]: E1212 00:31:14.662517 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3\": container with ID starting with 8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3 not found: ID does not exist" containerID="8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3" Dec 12 00:31:14 crc kubenswrapper[4917]: I1212 00:31:14.662546 4917 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3"} err="failed to get container status \"8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3\": rpc error: code = NotFound desc = could not find container \"8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3\": container with ID starting with 8e6e66827afbfed69957094d2af84acedea3262ae61377811080d77ff36d0bc3 not found: ID does not exist" Dec 12 00:31:15 crc kubenswrapper[4917]: I1212 00:31:15.611695 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" path="/var/lib/kubelet/pods/4ef6fc63-019d-4e87-8c37-23ed24ee0020/volumes" Dec 12 00:31:18 crc kubenswrapper[4917]: I1212 00:31:18.886152 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:18 crc kubenswrapper[4917]: I1212 00:31:18.886440 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:18 crc kubenswrapper[4917]: I1212 00:31:18.938751 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:19 crc kubenswrapper[4917]: I1212 00:31:19.670542 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:19 crc kubenswrapper[4917]: I1212 00:31:19.724840 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4z494"] Dec 12 00:31:21 crc kubenswrapper[4917]: I1212 00:31:21.638127 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4z494" podUID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerName="registry-server" 
containerID="cri-o://9d7f163ebb5e41d3ac5004d37f536568e3475e0ebb91081129ea0404dd2abd61" gracePeriod=2 Dec 12 00:31:22 crc kubenswrapper[4917]: I1212 00:31:22.646543 4917 generic.go:334] "Generic (PLEG): container finished" podID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerID="9d7f163ebb5e41d3ac5004d37f536568e3475e0ebb91081129ea0404dd2abd61" exitCode=0 Dec 12 00:31:22 crc kubenswrapper[4917]: I1212 00:31:22.646614 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z494" event={"ID":"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017","Type":"ContainerDied","Data":"9d7f163ebb5e41d3ac5004d37f536568e3475e0ebb91081129ea0404dd2abd61"} Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.528144 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.644866 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44lhl\" (UniqueName: \"kubernetes.io/projected/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-kube-api-access-44lhl\") pod \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.644944 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-utilities\") pod \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.645029 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-catalog-content\") pod \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\" (UID: \"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017\") " Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 
00:31:23.646056 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-utilities" (OuterVolumeSpecName: "utilities") pod "1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" (UID: "1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.656080 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-kube-api-access-44lhl" (OuterVolumeSpecName: "kube-api-access-44lhl") pod "1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" (UID: "1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017"). InnerVolumeSpecName "kube-api-access-44lhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.674246 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z494" event={"ID":"1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017","Type":"ContainerDied","Data":"11db9f51f82b7544c8c69392aa237343aa50bb7cf9e39319bd36ad10b0f00666"} Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.674393 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4z494" Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.674487 4917 scope.go:117] "RemoveContainer" containerID="9d7f163ebb5e41d3ac5004d37f536568e3475e0ebb91081129ea0404dd2abd61" Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.707623 4917 scope.go:117] "RemoveContainer" containerID="1f558d00f41f11861f3218d0e1e1904429397844ab295cda8d3f0324c8bcf753" Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.722026 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" (UID: "1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.723715 4917 scope.go:117] "RemoveContainer" containerID="507dca4cb6bcb3ff89059aebc57997bc7e56e9d6ac402b7277824106032aec94" Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.747250 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44lhl\" (UniqueName: \"kubernetes.io/projected/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-kube-api-access-44lhl\") on node \"crc\" DevicePath \"\"" Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.747285 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:31:23 crc kubenswrapper[4917]: I1212 00:31:23.747296 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:31:24 crc kubenswrapper[4917]: I1212 00:31:24.002339 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-4z494"] Dec 12 00:31:24 crc kubenswrapper[4917]: I1212 00:31:24.007740 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4z494"] Dec 12 00:31:25 crc kubenswrapper[4917]: I1212 00:31:25.609217 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" path="/var/lib/kubelet/pods/1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017/volumes" Dec 12 00:32:29 crc kubenswrapper[4917]: I1212 00:32:29.639747 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:32:29 crc kubenswrapper[4917]: I1212 00:32:29.640741 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:32:43 crc kubenswrapper[4917]: I1212 00:32:43.603047 4917 generic.go:334] "Generic (PLEG): container finished" podID="641f61fa-3761-480d-9d1b-f5a10f5b62f4" containerID="239f1b203fecdd562d6f0cf5d7604343a6a42097b11e9593737ee753112a7b33" exitCode=0 Dec 12 00:32:43 crc kubenswrapper[4917]: I1212 00:32:43.610219 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"641f61fa-3761-480d-9d1b-f5a10f5b62f4","Type":"ContainerDied","Data":"239f1b203fecdd562d6f0cf5d7604343a6a42097b11e9593737ee753112a7b33"} Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.860693 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.909796 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildcachedir\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.909857 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-proxy-ca-bundles\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.909911 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-pull\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.909936 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-root\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.909964 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-ca-bundles\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.909986 4917 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zqvfv\" (UniqueName: \"kubernetes.io/projected/641f61fa-3761-480d-9d1b-f5a10f5b62f4-kube-api-access-zqvfv\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.910006 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-node-pullsecrets\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.910028 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-run\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.910046 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-blob-cache\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.910011 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.910245 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.911119 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.912532 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.912960 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.913335 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.918116 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:32:44 crc kubenswrapper[4917]: I1212 00:32:44.918156 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641f61fa-3761-480d-9d1b-f5a10f5b62f4-kube-api-access-zqvfv" (OuterVolumeSpecName: "kube-api-access-zqvfv") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "kube-api-access-zqvfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.108834 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-push\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.108880 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-system-configs\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.108898 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildworkdir\") pod \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\" (UID: \"641f61fa-3761-480d-9d1b-f5a10f5b62f4\") " Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.109077 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/641f61fa-3761-480d-9d1b-f5a10f5b62f4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.109090 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.109101 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 
00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.109110 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.109119 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.109129 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqvfv\" (UniqueName: \"kubernetes.io/projected/641f61fa-3761-480d-9d1b-f5a10f5b62f4-kube-api-access-zqvfv\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.114041 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.114975 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.122594 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.210360 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/641f61fa-3761-480d-9d1b-f5a10f5b62f4-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.210796 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.210879 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.344475 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.413337 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.621737 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"641f61fa-3761-480d-9d1b-f5a10f5b62f4","Type":"ContainerDied","Data":"dfd720f16ea31b39d4379e5a7d49ab1557cc25254ad9538c7b78cab91b7447e0"} Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.621795 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd720f16ea31b39d4379e5a7d49ab1557cc25254ad9538c7b78cab91b7447e0" Dec 12 00:32:45 crc kubenswrapper[4917]: I1212 00:32:45.622124 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 12 00:32:47 crc kubenswrapper[4917]: I1212 00:32:47.812462 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "641f61fa-3761-480d-9d1b-f5a10f5b62f4" (UID: "641f61fa-3761-480d-9d1b-f5a10f5b62f4"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:32:47 crc kubenswrapper[4917]: I1212 00:32:47.915883 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/641f61fa-3761-480d-9d1b-f5a10f5b62f4-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.092754 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 12 00:32:50 crc kubenswrapper[4917]: E1212 00:32:50.093611 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641f61fa-3761-480d-9d1b-f5a10f5b62f4" containerName="git-clone" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093659 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="641f61fa-3761-480d-9d1b-f5a10f5b62f4" containerName="git-clone" Dec 12 00:32:50 crc kubenswrapper[4917]: E1212 00:32:50.093672 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerName="registry-server" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093678 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerName="registry-server" Dec 12 00:32:50 crc kubenswrapper[4917]: E1212 00:32:50.093688 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641f61fa-3761-480d-9d1b-f5a10f5b62f4" containerName="docker-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093697 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="641f61fa-3761-480d-9d1b-f5a10f5b62f4" containerName="docker-build" Dec 12 00:32:50 crc kubenswrapper[4917]: E1212 00:32:50.093708 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641f61fa-3761-480d-9d1b-f5a10f5b62f4" containerName="manage-dockerfile" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093714 4917 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="641f61fa-3761-480d-9d1b-f5a10f5b62f4" containerName="manage-dockerfile" Dec 12 00:32:50 crc kubenswrapper[4917]: E1212 00:32:50.093722 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerName="extract-content" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093728 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerName="extract-content" Dec 12 00:32:50 crc kubenswrapper[4917]: E1212 00:32:50.093741 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerName="extract-utilities" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093747 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerName="extract-utilities" Dec 12 00:32:50 crc kubenswrapper[4917]: E1212 00:32:50.093757 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" containerName="registry-server" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093763 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" containerName="registry-server" Dec 12 00:32:50 crc kubenswrapper[4917]: E1212 00:32:50.093777 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" containerName="extract-utilities" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093783 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" containerName="extract-utilities" Dec 12 00:32:50 crc kubenswrapper[4917]: E1212 00:32:50.093794 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" containerName="extract-content" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093799 4917 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" containerName="extract-content" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093924 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb12561-9d5f-42b2-a9a8-9bb0e5c8d017" containerName="registry-server" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093936 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef6fc63-019d-4e87-8c37-23ed24ee0020" containerName="registry-server" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.093949 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="641f61fa-3761-480d-9d1b-f5a10f5b62f4" containerName="docker-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.094863 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.098181 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.098181 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.098194 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.098197 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.116705 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.250707 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.250795 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.250900 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-pull\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.250927 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.251004 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.251071 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.251105 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-push\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.251153 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669p4\" (UniqueName: \"kubernetes.io/projected/f6696c9a-b033-4e3f-ad3d-abc6da56c750-kube-api-access-669p4\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.251179 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.251241 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.251266 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.251305 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.353197 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.353728 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-push\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.353913 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-669p4\" (UniqueName: \"kubernetes.io/projected/f6696c9a-b033-4e3f-ad3d-abc6da56c750-kube-api-access-669p4\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.354036 4917 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.354185 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.354296 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.354420 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.354467 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.354679 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.354869 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.354955 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.354926 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.355085 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.355120 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-root\") pod 
\"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.355223 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-pull\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.355316 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.355336 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.355541 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.355661 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " 
pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.356078 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.356093 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.361324 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-pull\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.361350 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-push\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.376145 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-669p4\" (UniqueName: \"kubernetes.io/projected/f6696c9a-b033-4e3f-ad3d-abc6da56c750-kube-api-access-669p4\") pod \"sg-bridge-1-build\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: 
I1212 00:32:50.423944 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 12 00:32:50 crc kubenswrapper[4917]: I1212 00:32:50.703779 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 12 00:32:51 crc kubenswrapper[4917]: I1212 00:32:51.670072 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"f6696c9a-b033-4e3f-ad3d-abc6da56c750","Type":"ContainerStarted","Data":"a05b6f7592d8afdac50ee8a09d49ab411171d0ed20552899b697d4a668bc88c7"} Dec 12 00:32:55 crc kubenswrapper[4917]: I1212 00:32:55.706435 4917 generic.go:334] "Generic (PLEG): container finished" podID="f6696c9a-b033-4e3f-ad3d-abc6da56c750" containerID="d8da6acec67dfa5c32c10296d1037271f97f96c3306b796caf24f7a552e17450" exitCode=0 Dec 12 00:32:55 crc kubenswrapper[4917]: I1212 00:32:55.706542 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"f6696c9a-b033-4e3f-ad3d-abc6da56c750","Type":"ContainerDied","Data":"d8da6acec67dfa5c32c10296d1037271f97f96c3306b796caf24f7a552e17450"} Dec 12 00:32:56 crc kubenswrapper[4917]: I1212 00:32:56.716565 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"f6696c9a-b033-4e3f-ad3d-abc6da56c750","Type":"ContainerStarted","Data":"3fe5e4529e184d81440e54490966c440d88173603c2f37566e764cdcacbdc8d4"} Dec 12 00:32:56 crc kubenswrapper[4917]: I1212 00:32:56.747308 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=6.747287088 podStartE2EDuration="6.747287088s" podCreationTimestamp="2025-12-12 00:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:32:56.742627425 +0000 UTC m=+1611.520428238" 
watchObservedRunningTime="2025-12-12 00:32:56.747287088 +0000 UTC m=+1611.525087891" Dec 12 00:32:59 crc kubenswrapper[4917]: I1212 00:32:59.639828 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:32:59 crc kubenswrapper[4917]: I1212 00:32:59.640613 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:33:00 crc kubenswrapper[4917]: I1212 00:33:00.567741 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 12 00:33:00 crc kubenswrapper[4917]: I1212 00:33:00.568143 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="f6696c9a-b033-4e3f-ad3d-abc6da56c750" containerName="docker-build" containerID="cri-o://3fe5e4529e184d81440e54490966c440d88173603c2f37566e764cdcacbdc8d4" gracePeriod=30 Dec 12 00:33:00 crc kubenswrapper[4917]: I1212 00:33:00.764509 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_f6696c9a-b033-4e3f-ad3d-abc6da56c750/docker-build/0.log" Dec 12 00:33:00 crc kubenswrapper[4917]: I1212 00:33:00.765467 4917 generic.go:334] "Generic (PLEG): container finished" podID="f6696c9a-b033-4e3f-ad3d-abc6da56c750" containerID="3fe5e4529e184d81440e54490966c440d88173603c2f37566e764cdcacbdc8d4" exitCode=1 Dec 12 00:33:00 crc kubenswrapper[4917]: I1212 00:33:00.765509 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" 
event={"ID":"f6696c9a-b033-4e3f-ad3d-abc6da56c750","Type":"ContainerDied","Data":"3fe5e4529e184d81440e54490966c440d88173603c2f37566e764cdcacbdc8d4"} Dec 12 00:33:00 crc kubenswrapper[4917]: I1212 00:33:00.972054 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_f6696c9a-b033-4e3f-ad3d-abc6da56c750/docker-build/0.log" Dec 12 00:33:00 crc kubenswrapper[4917]: I1212 00:33:00.972906 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.127742 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildworkdir\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.127824 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-push\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.127860 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-system-configs\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.127887 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-669p4\" (UniqueName: \"kubernetes.io/projected/f6696c9a-b033-4e3f-ad3d-abc6da56c750-kube-api-access-669p4\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: 
\"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.127914 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-proxy-ca-bundles\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.127950 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-blob-cache\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.128018 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-root\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.128041 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-run\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.128060 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-ca-bundles\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.128083 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-node-pullsecrets\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.128104 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildcachedir\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.128121 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-pull\") pod \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\" (UID: \"f6696c9a-b033-4e3f-ad3d-abc6da56c750\") " Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.128506 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.128505 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.128590 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.128759 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.129088 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.129112 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.129334 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.140596 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.140604 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6696c9a-b033-4e3f-ad3d-abc6da56c750-kube-api-access-669p4" (OuterVolumeSpecName: "kube-api-access-669p4") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "kube-api-access-669p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.140715 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.208614 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.211051 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f6696c9a-b033-4e3f-ad3d-abc6da56c750" (UID: "f6696c9a-b033-4e3f-ad3d-abc6da56c750"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229702 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229757 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229775 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-669p4\" (UniqueName: \"kubernetes.io/projected/f6696c9a-b033-4e3f-ad3d-abc6da56c750-kube-api-access-669p4\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229795 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229817 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229836 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229852 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229877 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6696c9a-b033-4e3f-ad3d-abc6da56c750-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229887 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229896 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229904 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: 
\"kubernetes.io/secret/f6696c9a-b033-4e3f-ad3d-abc6da56c750-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.229923 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f6696c9a-b033-4e3f-ad3d-abc6da56c750-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.775890 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_f6696c9a-b033-4e3f-ad3d-abc6da56c750/docker-build/0.log" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.777994 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"f6696c9a-b033-4e3f-ad3d-abc6da56c750","Type":"ContainerDied","Data":"a05b6f7592d8afdac50ee8a09d49ab411171d0ed20552899b697d4a668bc88c7"} Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.778104 4917 scope.go:117] "RemoveContainer" containerID="3fe5e4529e184d81440e54490966c440d88173603c2f37566e764cdcacbdc8d4" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.778120 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.803476 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.809581 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 12 00:33:01 crc kubenswrapper[4917]: I1212 00:33:01.834605 4917 scope.go:117] "RemoveContainer" containerID="d8da6acec67dfa5c32c10296d1037271f97f96c3306b796caf24f7a552e17450" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.141374 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 12 00:33:02 crc kubenswrapper[4917]: E1212 00:33:02.142074 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6696c9a-b033-4e3f-ad3d-abc6da56c750" containerName="docker-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.142108 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6696c9a-b033-4e3f-ad3d-abc6da56c750" containerName="docker-build" Dec 12 00:33:02 crc kubenswrapper[4917]: E1212 00:33:02.142123 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6696c9a-b033-4e3f-ad3d-abc6da56c750" containerName="manage-dockerfile" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.142134 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6696c9a-b033-4e3f-ad3d-abc6da56c750" containerName="manage-dockerfile" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.142276 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6696c9a-b033-4e3f-ad3d-abc6da56c750" containerName="docker-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.144119 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.144997 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.145106 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.145174 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-pull\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.145297 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.145448 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnbnk\" (UniqueName: \"kubernetes.io/projected/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-kube-api-access-mnbnk\") pod \"sg-bridge-2-build\" 
(UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.145504 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.145549 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-push\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.145792 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.146039 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.146237 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-system-configs\") 
pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.146346 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.146470 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.147186 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.147470 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.147957 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.148732 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.161581 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248169 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248269 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248303 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248336 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248381 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248415 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248444 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-pull\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248469 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248504 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnbnk\" (UniqueName: \"kubernetes.io/projected/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-kube-api-access-mnbnk\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248531 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248560 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: 
\"kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-push\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.248597 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.249517 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.249694 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.250049 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.250333 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-proxy-ca-bundles\") pod 
\"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.252126 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.252133 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.252373 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.252443 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.252462 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 
crc kubenswrapper[4917]: I1212 00:33:02.262890 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-push\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.262931 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-pull\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.270915 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnbnk\" (UniqueName: \"kubernetes.io/projected/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-kube-api-access-mnbnk\") pod \"sg-bridge-2-build\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.478075 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.703758 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 12 00:33:02 crc kubenswrapper[4917]: I1212 00:33:02.786292 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e","Type":"ContainerStarted","Data":"fb0a8864e8f6e2b955cf609457da969fb41d85db6351bb49ff58e8c954738f09"} Dec 12 00:33:03 crc kubenswrapper[4917]: I1212 00:33:03.611542 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6696c9a-b033-4e3f-ad3d-abc6da56c750" path="/var/lib/kubelet/pods/f6696c9a-b033-4e3f-ad3d-abc6da56c750/volumes" Dec 12 00:33:04 crc kubenswrapper[4917]: I1212 00:33:04.801385 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e","Type":"ContainerStarted","Data":"816a4a0c939ab4307cf7ed7032ac1db45c18178eaf7fbdce08db28cfb32ec3b2"} Dec 12 00:33:05 crc kubenswrapper[4917]: I1212 00:33:05.809358 4917 generic.go:334] "Generic (PLEG): container finished" podID="eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" containerID="816a4a0c939ab4307cf7ed7032ac1db45c18178eaf7fbdce08db28cfb32ec3b2" exitCode=0 Dec 12 00:33:05 crc kubenswrapper[4917]: I1212 00:33:05.809446 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e","Type":"ContainerDied","Data":"816a4a0c939ab4307cf7ed7032ac1db45c18178eaf7fbdce08db28cfb32ec3b2"} Dec 12 00:33:06 crc kubenswrapper[4917]: I1212 00:33:06.820851 4917 generic.go:334] "Generic (PLEG): container finished" podID="eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" containerID="3779c2ff59623e29571a1c8ef384c675274c0b880b5593008910dcf2a8a2ae44" exitCode=0 Dec 12 00:33:06 crc kubenswrapper[4917]: I1212 00:33:06.820966 4917 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e","Type":"ContainerDied","Data":"3779c2ff59623e29571a1c8ef384c675274c0b880b5593008910dcf2a8a2ae44"} Dec 12 00:33:06 crc kubenswrapper[4917]: I1212 00:33:06.856535 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e/manage-dockerfile/0.log" Dec 12 00:33:07 crc kubenswrapper[4917]: I1212 00:33:07.837874 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e","Type":"ContainerStarted","Data":"a6ac3d1c10983844976570cba8634d35452b2c242ea093dfe2ffd51ee927eb43"} Dec 12 00:33:07 crc kubenswrapper[4917]: I1212 00:33:07.872793 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.872767695 podStartE2EDuration="5.872767695s" podCreationTimestamp="2025-12-12 00:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:33:07.866835169 +0000 UTC m=+1622.644635992" watchObservedRunningTime="2025-12-12 00:33:07.872767695 +0000 UTC m=+1622.650568518" Dec 12 00:33:29 crc kubenswrapper[4917]: I1212 00:33:29.639738 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:33:29 crc kubenswrapper[4917]: I1212 00:33:29.640729 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:33:29 crc kubenswrapper[4917]: I1212 00:33:29.640798 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:33:29 crc kubenswrapper[4917]: I1212 00:33:29.641962 4917 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542"} pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 00:33:29 crc kubenswrapper[4917]: I1212 00:33:29.642019 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" containerID="cri-o://e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" gracePeriod=600 Dec 12 00:33:30 crc kubenswrapper[4917]: I1212 00:33:30.005179 4917 generic.go:334] "Generic (PLEG): container finished" podID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" exitCode=0 Dec 12 00:33:30 crc kubenswrapper[4917]: I1212 00:33:30.005618 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerDied","Data":"e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542"} Dec 12 00:33:30 crc kubenswrapper[4917]: I1212 00:33:30.005748 4917 scope.go:117] "RemoveContainer" containerID="c8ec3b647c80a5906eabbf0fa7c865967e32995db04bf0586010c232bc53706c" Dec 12 00:33:30 crc kubenswrapper[4917]: E1212 00:33:30.272751 4917 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:33:31 crc kubenswrapper[4917]: I1212 00:33:31.016567 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:33:31 crc kubenswrapper[4917]: E1212 00:33:31.016865 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:33:44 crc kubenswrapper[4917]: I1212 00:33:44.602919 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:33:44 crc kubenswrapper[4917]: E1212 00:33:44.606802 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:33:59 crc kubenswrapper[4917]: I1212 00:33:59.602740 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:33:59 crc kubenswrapper[4917]: E1212 00:33:59.603855 4917 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:34:05 crc kubenswrapper[4917]: I1212 00:34:05.264783 4917 generic.go:334] "Generic (PLEG): container finished" podID="eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" containerID="a6ac3d1c10983844976570cba8634d35452b2c242ea093dfe2ffd51ee927eb43" exitCode=0 Dec 12 00:34:05 crc kubenswrapper[4917]: I1212 00:34:05.266231 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e","Type":"ContainerDied","Data":"a6ac3d1c10983844976570cba8634d35452b2c242ea093dfe2ffd51ee927eb43"} Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.520819 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.669735 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnbnk\" (UniqueName: \"kubernetes.io/projected/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-kube-api-access-mnbnk\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.669818 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-node-pullsecrets\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.669858 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-push\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.669921 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-ca-bundles\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.669950 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildworkdir\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.670049 4917 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-run\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.670784 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.669984 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.671397 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.671467 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). 
InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.670186 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-system-configs\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.671800 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-root\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.672478 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.673746 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-blob-cache\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.673859 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-proxy-ca-bundles\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.673895 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-pull\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.673934 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildcachedir\") pod \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\" (UID: \"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e\") " Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.674064 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.674279 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.674441 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.674467 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.674480 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.674496 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.674507 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.674518 4917 reconciler_common.go:293] 
"Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.674529 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.676460 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-kube-api-access-mnbnk" (OuterVolumeSpecName: "kube-api-access-mnbnk") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "kube-api-access-mnbnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.682801 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.682878 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.776633 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.776711 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnbnk\" (UniqueName: \"kubernetes.io/projected/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-kube-api-access-mnbnk\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.776726 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.797788 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:34:06 crc kubenswrapper[4917]: I1212 00:34:06.881554 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:07 crc kubenswrapper[4917]: I1212 00:34:07.281595 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e","Type":"ContainerDied","Data":"fb0a8864e8f6e2b955cf609457da969fb41d85db6351bb49ff58e8c954738f09"} Dec 12 00:34:07 crc kubenswrapper[4917]: I1212 00:34:07.281685 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0a8864e8f6e2b955cf609457da969fb41d85db6351bb49ff58e8c954738f09" Dec 12 00:34:07 crc kubenswrapper[4917]: I1212 00:34:07.282133 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 12 00:34:07 crc kubenswrapper[4917]: I1212 00:34:07.389819 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" (UID: "eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:34:07 crc kubenswrapper[4917]: I1212 00:34:07.490385 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:12 crc kubenswrapper[4917]: I1212 00:34:12.601863 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:34:12 crc kubenswrapper[4917]: E1212 00:34:12.602315 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.853496 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 12 00:34:14 crc kubenswrapper[4917]: E1212 00:34:14.853871 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" containerName="manage-dockerfile" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.853888 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" containerName="manage-dockerfile" Dec 12 00:34:14 crc kubenswrapper[4917]: E1212 00:34:14.853907 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" containerName="git-clone" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.854398 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" containerName="git-clone" Dec 12 00:34:14 crc 
kubenswrapper[4917]: E1212 00:34:14.854426 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" containerName="docker-build" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.854432 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" containerName="docker-build" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.855822 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb99c04e-5e56-4c89-b4d2-e8aa4a49ea6e" containerName="docker-build" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.858140 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.867103 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.867354 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.867614 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.867862 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Dec 12 00:34:14 crc kubenswrapper[4917]: I1212 00:34:14.887669 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.009766 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-ca-bundles\") pod 
\"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.009858 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvwn\" (UniqueName: \"kubernetes.io/projected/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-kube-api-access-nrvwn\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.009918 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.009941 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.009963 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.010095 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.010203 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.010328 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.010449 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.010485 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.010517 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.010536 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.111965 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112086 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112116 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: 
\"kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112141 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112165 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112200 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112236 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrvwn\" (UniqueName: \"kubernetes.io/projected/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-kube-api-access-nrvwn\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112272 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112295 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112319 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112340 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.112361 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc 
kubenswrapper[4917]: I1212 00:34:15.112926 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.114064 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.114122 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.114216 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.114340 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 
crc kubenswrapper[4917]: I1212 00:34:15.114438 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.114551 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.114755 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.114977 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.120907 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.121350 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.132318 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrvwn\" (UniqueName: \"kubernetes.io/projected/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-kube-api-access-nrvwn\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.187249 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:15 crc kubenswrapper[4917]: I1212 00:34:15.630476 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 12 00:34:16 crc kubenswrapper[4917]: I1212 00:34:16.352352 4917 generic.go:334] "Generic (PLEG): container finished" podID="76478c9e-dbea-4a1d-87aa-fc48ee903d4b" containerID="a7306a620c89d9fc7df3446da41a947329693e1682e9e574e5b7e343312634fd" exitCode=0 Dec 12 00:34:16 crc kubenswrapper[4917]: I1212 00:34:16.352425 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"76478c9e-dbea-4a1d-87aa-fc48ee903d4b","Type":"ContainerDied","Data":"a7306a620c89d9fc7df3446da41a947329693e1682e9e574e5b7e343312634fd"} Dec 12 00:34:16 crc kubenswrapper[4917]: I1212 00:34:16.354008 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"76478c9e-dbea-4a1d-87aa-fc48ee903d4b","Type":"ContainerStarted","Data":"74447514f892aed48271e0fce056ffe00ea5bc194f7d36c541727b3133a0a753"} Dec 12 00:34:17 crc kubenswrapper[4917]: I1212 00:34:17.364438 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"76478c9e-dbea-4a1d-87aa-fc48ee903d4b","Type":"ContainerStarted","Data":"c90f9d61de36f03ad44326e483edc241da966d3b627aecefa9a36478ea7a4895"} Dec 12 00:34:17 crc kubenswrapper[4917]: I1212 00:34:17.396845 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.396823818 podStartE2EDuration="3.396823818s" podCreationTimestamp="2025-12-12 00:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:34:17.393191163 +0000 UTC m=+1692.170991996" watchObservedRunningTime="2025-12-12 00:34:17.396823818 +0000 UTC m=+1692.174624631" Dec 12 00:34:23 crc kubenswrapper[4917]: I1212 00:34:23.815347 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 12 00:34:23 crc kubenswrapper[4917]: I1212 00:34:23.816245 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="76478c9e-dbea-4a1d-87aa-fc48ee903d4b" containerName="docker-build" containerID="cri-o://c90f9d61de36f03ad44326e483edc241da966d3b627aecefa9a36478ea7a4895" gracePeriod=30 Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.498912 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.502524 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.505244 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.506014 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.506964 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.524125 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.575954 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576071 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576111 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-pull\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576160 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576267 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576395 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576453 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576526 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576551 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576589 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkxv\" (UniqueName: \"kubernetes.io/projected/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-kube-api-access-zbkxv\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576661 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.576710 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.678261 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.678333 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.678377 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.678402 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.678439 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-blob-cache\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.678473 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679065 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.678511 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679146 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679177 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbkxv\" (UniqueName: 
\"kubernetes.io/projected/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-kube-api-access-zbkxv\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679210 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679242 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679291 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679283 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679295 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679360 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679408 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.679379 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.680004 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.680172 4917 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.680429 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.684427 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.684430 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.706682 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbkxv\" (UniqueName: \"kubernetes.io/projected/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-kube-api-access-zbkxv\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:25 crc kubenswrapper[4917]: I1212 00:34:25.820267 4917 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:34:26 crc kubenswrapper[4917]: I1212 00:34:26.058614 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 12 00:34:26 crc kubenswrapper[4917]: I1212 00:34:26.459722 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58","Type":"ContainerStarted","Data":"b88c51753194faba643c13c392a931f00b131006d78924d6e6a0b6e3e41aeb92"} Dec 12 00:34:26 crc kubenswrapper[4917]: I1212 00:34:26.601942 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:34:26 crc kubenswrapper[4917]: E1212 00:34:26.602354 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:34:27 crc kubenswrapper[4917]: I1212 00:34:27.468609 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58","Type":"ContainerStarted","Data":"5dec692897aa8e4dfc66b21bda48c67c9958d144e89c66be793c9642f40361a0"} Dec 12 00:34:27 crc kubenswrapper[4917]: I1212 00:34:27.472261 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_76478c9e-dbea-4a1d-87aa-fc48ee903d4b/docker-build/0.log" Dec 12 00:34:27 crc kubenswrapper[4917]: I1212 00:34:27.472861 4917 generic.go:334] "Generic (PLEG): container finished" 
podID="76478c9e-dbea-4a1d-87aa-fc48ee903d4b" containerID="c90f9d61de36f03ad44326e483edc241da966d3b627aecefa9a36478ea7a4895" exitCode=1 Dec 12 00:34:27 crc kubenswrapper[4917]: I1212 00:34:27.472905 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"76478c9e-dbea-4a1d-87aa-fc48ee903d4b","Type":"ContainerDied","Data":"c90f9d61de36f03ad44326e483edc241da966d3b627aecefa9a36478ea7a4895"} Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.698908 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_76478c9e-dbea-4a1d-87aa-fc48ee903d4b/docker-build/0.log" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.700262 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.778745 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-ca-bundles\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.778814 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-node-pullsecrets\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.778858 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-push\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 
00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.778911 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-run\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.778934 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-proxy-ca-bundles\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.779006 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrvwn\" (UniqueName: \"kubernetes.io/projected/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-kube-api-access-nrvwn\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.779030 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildworkdir\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.779113 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-system-configs\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.779187 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-blob-cache\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.779217 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-pull\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.779237 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildcachedir\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.779267 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-root\") pod \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\" (UID: \"76478c9e-dbea-4a1d-87aa-fc48ee903d4b\") " Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.780255 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.780469 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.780502 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.780756 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.781103 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.781424 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.781427 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.786947 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.787385 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-kube-api-access-nrvwn" (OuterVolumeSpecName: "kube-api-access-nrvwn") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "kube-api-access-nrvwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.798920 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.831814 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:34:31 crc kubenswrapper[4917]: E1212 00:34:31.843571 4917 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.15:44932->38.129.56.15:36769: read tcp 38.129.56.15:44932->38.129.56.15:36769: read: connection reset by peer Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881211 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrvwn\" (UniqueName: \"kubernetes.io/projected/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-kube-api-access-nrvwn\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881264 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881276 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881285 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881297 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881306 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881315 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881324 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881334 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881343 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:31 crc kubenswrapper[4917]: I1212 00:34:31.881353 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.190314 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "76478c9e-dbea-4a1d-87aa-fc48ee903d4b" (UID: "76478c9e-dbea-4a1d-87aa-fc48ee903d4b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.287411 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76478c9e-dbea-4a1d-87aa-fc48ee903d4b-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.512485 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_76478c9e-dbea-4a1d-87aa-fc48ee903d4b/docker-build/0.log" Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.513074 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"76478c9e-dbea-4a1d-87aa-fc48ee903d4b","Type":"ContainerDied","Data":"74447514f892aed48271e0fce056ffe00ea5bc194f7d36c541727b3133a0a753"} Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.513165 4917 scope.go:117] "RemoveContainer" containerID="c90f9d61de36f03ad44326e483edc241da966d3b627aecefa9a36478ea7a4895" Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.513184 4917 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.515131 4917 generic.go:334] "Generic (PLEG): container finished" podID="b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" containerID="5dec692897aa8e4dfc66b21bda48c67c9958d144e89c66be793c9642f40361a0" exitCode=0 Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.515172 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58","Type":"ContainerDied","Data":"5dec692897aa8e4dfc66b21bda48c67c9958d144e89c66be793c9642f40361a0"} Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.562245 4917 scope.go:117] "RemoveContainer" containerID="a7306a620c89d9fc7df3446da41a947329693e1682e9e574e5b7e343312634fd" Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.570038 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 12 00:34:32 crc kubenswrapper[4917]: I1212 00:34:32.576890 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 12 00:34:33 crc kubenswrapper[4917]: I1212 00:34:33.524614 4917 generic.go:334] "Generic (PLEG): container finished" podID="b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" containerID="05bcc43a3bd8226480ad9dc9e764f303a70891c733877a95f00b0b306c7292e9" exitCode=0 Dec 12 00:34:33 crc kubenswrapper[4917]: I1212 00:34:33.524712 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58","Type":"ContainerDied","Data":"05bcc43a3bd8226480ad9dc9e764f303a70891c733877a95f00b0b306c7292e9"} Dec 12 00:34:33 crc kubenswrapper[4917]: I1212 00:34:33.571083 4917 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58/manage-dockerfile/0.log" Dec 12 00:34:33 crc kubenswrapper[4917]: I1212 00:34:33.610310 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76478c9e-dbea-4a1d-87aa-fc48ee903d4b" path="/var/lib/kubelet/pods/76478c9e-dbea-4a1d-87aa-fc48ee903d4b/volumes" Dec 12 00:34:34 crc kubenswrapper[4917]: I1212 00:34:34.547602 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58","Type":"ContainerStarted","Data":"1fa4d0df652e0b4475c140da89c1dcdb438e24391730d4527e3894e8434389c0"} Dec 12 00:34:34 crc kubenswrapper[4917]: I1212 00:34:34.578876 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=9.57883658 podStartE2EDuration="9.57883658s" podCreationTimestamp="2025-12-12 00:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:34:34.577028893 +0000 UTC m=+1709.354829706" watchObservedRunningTime="2025-12-12 00:34:34.57883658 +0000 UTC m=+1709.356637383" Dec 12 00:34:39 crc kubenswrapper[4917]: I1212 00:34:39.602022 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:34:39 crc kubenswrapper[4917]: E1212 00:34:39.602923 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:34:50 crc kubenswrapper[4917]: 
I1212 00:34:50.602129 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:34:50 crc kubenswrapper[4917]: E1212 00:34:50.603408 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:35:03 crc kubenswrapper[4917]: I1212 00:35:03.602198 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:35:03 crc kubenswrapper[4917]: E1212 00:35:03.603093 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:35:17 crc kubenswrapper[4917]: I1212 00:35:17.602096 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:35:17 crc kubenswrapper[4917]: E1212 00:35:17.602851 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:35:31 crc 
kubenswrapper[4917]: I1212 00:35:31.606850 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:35:31 crc kubenswrapper[4917]: E1212 00:35:31.608092 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:35:32 crc kubenswrapper[4917]: I1212 00:35:32.990220 4917 generic.go:334] "Generic (PLEG): container finished" podID="b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" containerID="1fa4d0df652e0b4475c140da89c1dcdb438e24391730d4527e3894e8434389c0" exitCode=0 Dec 12 00:35:32 crc kubenswrapper[4917]: I1212 00:35:32.990263 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58","Type":"ContainerDied","Data":"1fa4d0df652e0b4475c140da89c1dcdb438e24391730d4527e3894e8434389c0"} Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.328385 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.365900 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildcachedir\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.365975 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-proxy-ca-bundles\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.366077 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.367102 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.367166 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbkxv\" (UniqueName: \"kubernetes.io/projected/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-kube-api-access-zbkxv\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.368489 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildworkdir\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.368551 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-node-pullsecrets\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.368582 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-root\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.368614 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-pull\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.368702 4917 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.368855 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-system-configs\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.368947 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-run\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.368997 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-push\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.369024 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-ca-bundles\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.369128 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-blob-cache\") pod \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\" (UID: \"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58\") " Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.369622 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.369641 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.369706 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.369787 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.370044 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.371220 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.373281 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.420875 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.420906 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-kube-api-access-zbkxv" (OuterVolumeSpecName: "kube-api-access-zbkxv") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "kube-api-access-zbkxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.420900 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.471712 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbkxv\" (UniqueName: \"kubernetes.io/projected/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-kube-api-access-zbkxv\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.471752 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.471765 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.471778 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.471789 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.471799 
4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.471810 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.482008 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:35:34 crc kubenswrapper[4917]: I1212 00:35:34.573083 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:35 crc kubenswrapper[4917]: I1212 00:35:35.011024 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58","Type":"ContainerDied","Data":"b88c51753194faba643c13c392a931f00b131006d78924d6e6a0b6e3e41aeb92"} Dec 12 00:35:35 crc kubenswrapper[4917]: I1212 00:35:35.011163 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 12 00:35:35 crc kubenswrapper[4917]: I1212 00:35:35.011790 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b88c51753194faba643c13c392a931f00b131006d78924d6e6a0b6e3e41aeb92" Dec 12 00:35:35 crc kubenswrapper[4917]: I1212 00:35:35.261126 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" (UID: "b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:35:35 crc kubenswrapper[4917]: I1212 00:35:35.284484 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:43 crc kubenswrapper[4917]: I1212 00:35:43.602019 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:35:43 crc kubenswrapper[4917]: E1212 00:35:43.603503 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.581095 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 12 00:35:44 crc kubenswrapper[4917]: E1212 00:35:44.581500 4917 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" containerName="manage-dockerfile" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.581519 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" containerName="manage-dockerfile" Dec 12 00:35:44 crc kubenswrapper[4917]: E1212 00:35:44.581538 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76478c9e-dbea-4a1d-87aa-fc48ee903d4b" containerName="manage-dockerfile" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.581546 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="76478c9e-dbea-4a1d-87aa-fc48ee903d4b" containerName="manage-dockerfile" Dec 12 00:35:44 crc kubenswrapper[4917]: E1212 00:35:44.581556 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" containerName="git-clone" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.581562 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" containerName="git-clone" Dec 12 00:35:44 crc kubenswrapper[4917]: E1212 00:35:44.581571 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" containerName="docker-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.581578 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" containerName="docker-build" Dec 12 00:35:44 crc kubenswrapper[4917]: E1212 00:35:44.581589 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76478c9e-dbea-4a1d-87aa-fc48ee903d4b" containerName="docker-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.581598 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="76478c9e-dbea-4a1d-87aa-fc48ee903d4b" containerName="docker-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.581803 4917 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="76478c9e-dbea-4a1d-87aa-fc48ee903d4b" containerName="docker-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.581821 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="b12f8bdc-329d-43ba-9a3b-ec65d2cc1d58" containerName="docker-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.582873 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.586150 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-sys-config" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.586224 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-ca" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.586357 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-global-ca" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.586669 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.666205 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.746193 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.746261 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.747005 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.747055 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.747368 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.747607 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-proxy-ca-bundles\") pod 
\"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.747794 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.747836 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.747882 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.747930 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" 
Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.747998 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.748051 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7stm\" (UniqueName: \"kubernetes.io/projected/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-kube-api-access-w7stm\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.849630 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.849731 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7stm\" (UniqueName: \"kubernetes.io/projected/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-kube-api-access-w7stm\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.849766 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.849789 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.849820 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.849845 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.849879 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc 
kubenswrapper[4917]: I1212 00:35:44.849931 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.849962 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.849972 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.850358 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.850405 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-blob-cache\") pod 
\"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.850441 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.849861 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.850595 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.850934 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.851822 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.851824 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.851874 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.851986 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.852217 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 
12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.858199 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.858199 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:44 crc kubenswrapper[4917]: I1212 00:35:44.877181 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7stm\" (UniqueName: \"kubernetes.io/projected/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-kube-api-access-w7stm\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:45 crc kubenswrapper[4917]: I1212 00:35:45.000225 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:45 crc kubenswrapper[4917]: I1212 00:35:45.246570 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 12 00:35:46 crc kubenswrapper[4917]: I1212 00:35:46.095628 4917 generic.go:334] "Generic (PLEG): container finished" podID="dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" containerID="638c8f47fc7adaba5f1c14bdfa4a9c3cc04e3b4302fac4a43ac2440a55977899" exitCode=0 Dec 12 00:35:46 crc kubenswrapper[4917]: I1212 00:35:46.095733 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334","Type":"ContainerDied","Data":"638c8f47fc7adaba5f1c14bdfa4a9c3cc04e3b4302fac4a43ac2440a55977899"} Dec 12 00:35:46 crc kubenswrapper[4917]: I1212 00:35:46.096241 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334","Type":"ContainerStarted","Data":"2ac7dd332899838efb6bac3bee07b8c22ab81ca494b3c2c037a131dc55f687bd"} Dec 12 00:35:47 crc kubenswrapper[4917]: I1212 00:35:47.110875 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_dd2d4cc5-ded5-423f-a1fc-6c6bb396a334/docker-build/0.log" Dec 12 00:35:47 crc kubenswrapper[4917]: I1212 00:35:47.112205 4917 generic.go:334] "Generic (PLEG): container finished" podID="dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" containerID="9c84d9666a82fd1f067fcff224c7d2f650cb1259b956147075a83c4cf45b3739" exitCode=1 Dec 12 00:35:47 crc kubenswrapper[4917]: I1212 00:35:47.112270 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" 
event={"ID":"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334","Type":"ContainerDied","Data":"9c84d9666a82fd1f067fcff224c7d2f650cb1259b956147075a83c4cf45b3739"} Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.362430 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_dd2d4cc5-ded5-423f-a1fc-6c6bb396a334/docker-build/0.log" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.362972 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.504734 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-proxy-ca-bundles\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.504814 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-root\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.504876 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7stm\" (UniqueName: \"kubernetes.io/projected/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-kube-api-access-w7stm\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.504940 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-push\") pod 
\"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.504970 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-run\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.505008 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-node-pullsecrets\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.505094 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-system-configs\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.505128 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildworkdir\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.505161 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildcachedir\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.505204 4917 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-pull\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.505260 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-ca-bundles\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.505291 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-blob-cache\") pod \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\" (UID: \"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334\") " Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.505873 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.505868 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.506070 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.506545 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.506670 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.506783 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.507874 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.508480 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.508246 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.513965 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.514147 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-kube-api-access-w7stm" (OuterVolumeSpecName: "kube-api-access-w7stm") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "kube-api-access-w7stm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.514766 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" (UID: "dd2d4cc5-ded5-423f-a1fc-6c6bb396a334"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607007 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607057 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607069 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607081 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607096 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607112 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7stm\" (UniqueName: \"kubernetes.io/projected/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-kube-api-access-w7stm\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607124 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607137 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607149 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607160 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607175 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:48 crc kubenswrapper[4917]: I1212 00:35:48.607217 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:35:49 crc kubenswrapper[4917]: I1212 00:35:49.128214 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_dd2d4cc5-ded5-423f-a1fc-6c6bb396a334/docker-build/0.log" Dec 12 00:35:49 crc kubenswrapper[4917]: I1212 00:35:49.129371 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"dd2d4cc5-ded5-423f-a1fc-6c6bb396a334","Type":"ContainerDied","Data":"2ac7dd332899838efb6bac3bee07b8c22ab81ca494b3c2c037a131dc55f687bd"} Dec 12 00:35:49 crc kubenswrapper[4917]: I1212 00:35:49.129461 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ac7dd332899838efb6bac3bee07b8c22ab81ca494b3c2c037a131dc55f687bd" Dec 12 00:35:49 crc kubenswrapper[4917]: I1212 00:35:49.129557 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 12 00:35:54 crc kubenswrapper[4917]: I1212 00:35:54.602043 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:35:54 crc kubenswrapper[4917]: E1212 00:35:54.603317 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:35:55 crc kubenswrapper[4917]: I1212 00:35:55.066528 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 12 00:35:55 crc kubenswrapper[4917]: I1212 00:35:55.072055 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 12 00:35:55 crc kubenswrapper[4917]: I1212 00:35:55.621530 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" path="/var/lib/kubelet/pods/dd2d4cc5-ded5-423f-a1fc-6c6bb396a334/volumes" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.740246 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 12 00:35:56 crc kubenswrapper[4917]: E1212 00:35:56.740744 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" containerName="docker-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.740765 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" containerName="docker-build" Dec 12 00:35:56 crc kubenswrapper[4917]: 
E1212 00:35:56.740779 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" containerName="manage-dockerfile" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.740788 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" containerName="manage-dockerfile" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.740989 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2d4cc5-ded5-423f-a1fc-6c6bb396a334" containerName="docker-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.743057 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.746232 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-sys-config" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.747015 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.747311 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-ca" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.749396 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-global-ca" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.768669 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.835870 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.835944 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.836002 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.836040 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.836089 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.836120 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8lzp\" (UniqueName: \"kubernetes.io/projected/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-kube-api-access-t8lzp\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.836430 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.836515 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.836642 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.836726 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.836783 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.836912 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.938883 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.938955 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildworkdir\") pod 
\"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.938994 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939018 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939054 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939080 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8lzp\" (UniqueName: \"kubernetes.io/projected/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-kube-api-access-t8lzp\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939110 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939134 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939134 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939176 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939276 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939295 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939342 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939368 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.939961 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.940211 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.940266 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.940245 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.940840 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.941038 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc 
kubenswrapper[4917]: I1212 00:35:56.941087 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.948574 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.953633 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:56 crc kubenswrapper[4917]: I1212 00:35:56.962183 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8lzp\" (UniqueName: \"kubernetes.io/projected/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-kube-api-access-t8lzp\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:57 crc kubenswrapper[4917]: I1212 00:35:57.065843 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:35:57 crc kubenswrapper[4917]: I1212 00:35:57.470166 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 12 00:35:58 crc kubenswrapper[4917]: I1212 00:35:58.344731 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32","Type":"ContainerStarted","Data":"d888013f36132f62223a2a87eba52cd5a6fc3a3c609c80776d2aa07d1b20ab93"} Dec 12 00:35:58 crc kubenswrapper[4917]: I1212 00:35:58.344788 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32","Type":"ContainerStarted","Data":"96da401e75b49a25f3f77db40e95d055f55f5f94e0e95da0b4cc67474628cfb6"} Dec 12 00:35:59 crc kubenswrapper[4917]: I1212 00:35:59.354350 4917 generic.go:334] "Generic (PLEG): container finished" podID="5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" containerID="d888013f36132f62223a2a87eba52cd5a6fc3a3c609c80776d2aa07d1b20ab93" exitCode=0 Dec 12 00:35:59 crc kubenswrapper[4917]: I1212 00:35:59.354966 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32","Type":"ContainerDied","Data":"d888013f36132f62223a2a87eba52cd5a6fc3a3c609c80776d2aa07d1b20ab93"} Dec 12 00:36:00 crc kubenswrapper[4917]: I1212 00:36:00.364549 4917 generic.go:334] "Generic (PLEG): container finished" podID="5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" containerID="66848100e9a31035697b04ed8b38885ee523899d707ad19b1fcfb21fdf89c062" exitCode=0 Dec 12 00:36:00 crc kubenswrapper[4917]: I1212 00:36:00.364628 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" 
event={"ID":"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32","Type":"ContainerDied","Data":"66848100e9a31035697b04ed8b38885ee523899d707ad19b1fcfb21fdf89c062"} Dec 12 00:36:00 crc kubenswrapper[4917]: I1212 00:36:00.409869 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_5ea1d037-d7c5-4db4-8c5a-6c3c70daca32/manage-dockerfile/0.log" Dec 12 00:36:01 crc kubenswrapper[4917]: I1212 00:36:01.381507 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32","Type":"ContainerStarted","Data":"facdbdcb23b6bf1b3bb2bcc263c48251b55e0e0a13cc30d1391a44ba2d3cba7b"} Dec 12 00:36:01 crc kubenswrapper[4917]: I1212 00:36:01.416413 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-2-build" podStartSLOduration=5.41638397 podStartE2EDuration="5.41638397s" podCreationTimestamp="2025-12-12 00:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:36:01.415800535 +0000 UTC m=+1796.193601368" watchObservedRunningTime="2025-12-12 00:36:01.41638397 +0000 UTC m=+1796.194184783" Dec 12 00:36:05 crc kubenswrapper[4917]: I1212 00:36:05.460421 4917 generic.go:334] "Generic (PLEG): container finished" podID="5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" containerID="facdbdcb23b6bf1b3bb2bcc263c48251b55e0e0a13cc30d1391a44ba2d3cba7b" exitCode=0 Dec 12 00:36:05 crc kubenswrapper[4917]: I1212 00:36:05.460518 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32","Type":"ContainerDied","Data":"facdbdcb23b6bf1b3bb2bcc263c48251b55e0e0a13cc30d1391a44ba2d3cba7b"} Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.801336 4917 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.836301 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-blob-cache\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.840274 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.937855 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-push\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938292 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildcachedir\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938330 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-pull\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: 
\"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938363 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-ca-bundles\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938396 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-node-pullsecrets\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938425 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8lzp\" (UniqueName: \"kubernetes.io/projected/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-kube-api-access-t8lzp\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938450 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-run\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938495 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-system-configs\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938521 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-root\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938546 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildworkdir\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938569 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-proxy-ca-bundles\") pod \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\" (UID: \"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32\") " Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938786 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.938897 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.939455 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.939485 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.939497 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.939592 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.939616 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.939630 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.939855 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.939963 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.946208 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-kube-api-access-t8lzp" (OuterVolumeSpecName: "kube-api-access-t8lzp") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "kube-api-access-t8lzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.946980 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.947117 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:06 crc kubenswrapper[4917]: I1212 00:36:06.948018 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" (UID: "5ea1d037-d7c5-4db4-8c5a-6c3c70daca32"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.040428 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.040842 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.040981 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.041069 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.041128 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.041183 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.041248 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-builder-dockercfg-tfjwq-pull\") 
on node \"crc\" DevicePath \"\"" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.041335 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.041399 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8lzp\" (UniqueName: \"kubernetes.io/projected/5ea1d037-d7c5-4db4-8c5a-6c3c70daca32-kube-api-access-t8lzp\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.550312 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"5ea1d037-d7c5-4db4-8c5a-6c3c70daca32","Type":"ContainerDied","Data":"96da401e75b49a25f3f77db40e95d055f55f5f94e0e95da0b4cc67474628cfb6"} Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.550380 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96da401e75b49a25f3f77db40e95d055f55f5f94e0e95da0b4cc67474628cfb6" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.550502 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 12 00:36:07 crc kubenswrapper[4917]: I1212 00:36:07.602016 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:36:07 crc kubenswrapper[4917]: E1212 00:36:07.602304 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.849741 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 12 00:36:10 crc kubenswrapper[4917]: E1212 00:36:10.851605 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" containerName="manage-dockerfile" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.851726 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" containerName="manage-dockerfile" Dec 12 00:36:10 crc kubenswrapper[4917]: E1212 00:36:10.851800 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" containerName="git-clone" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.851864 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" containerName="git-clone" Dec 12 00:36:10 crc kubenswrapper[4917]: E1212 00:36:10.851935 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" containerName="docker-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.852014 4917 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" containerName="docker-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.852222 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea1d037-d7c5-4db4-8c5a-6c3c70daca32" containerName="docker-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.853133 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.855544 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-ca" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.855894 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-sys-config" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.856106 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-global-ca" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.859553 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.896421 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.904942 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905138 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905302 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905450 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905497 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905520 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-push\") pod 
\"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905546 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905568 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905597 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905714 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905753 4917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjw9\" (UniqueName: \"kubernetes.io/projected/7ae2904c-ec22-4ebb-865b-3ac99848080a-kube-api-access-hrjw9\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:10 crc kubenswrapper[4917]: I1212 00:36:10.905805 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.006520 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.006594 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.006625 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-root\") pod 
\"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.006688 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.006722 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.006748 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.006906 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.006974 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" 
(UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.007167 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.007263 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.007316 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrjw9\" (UniqueName: \"kubernetes.io/projected/7ae2904c-ec22-4ebb-865b-3ac99848080a-kube-api-access-hrjw9\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.007369 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: 
I1212 00:36:11.007390 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.007403 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.007449 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.007466 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.007443 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.008003 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.008923 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.008926 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.009971 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.014270 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-pull\") pod 
\"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.022904 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.028267 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrjw9\" (UniqueName: \"kubernetes.io/projected/7ae2904c-ec22-4ebb-865b-3ac99848080a-kube-api-access-hrjw9\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.179260 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.422074 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 12 00:36:11 crc kubenswrapper[4917]: I1212 00:36:11.583688 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"7ae2904c-ec22-4ebb-865b-3ac99848080a","Type":"ContainerStarted","Data":"24bca2f4a4aff4686b95497b33e35f10ece36e79687c42856ccc1ff1a053e3ea"} Dec 12 00:36:12 crc kubenswrapper[4917]: I1212 00:36:12.593625 4917 generic.go:334] "Generic (PLEG): container finished" podID="7ae2904c-ec22-4ebb-865b-3ac99848080a" containerID="7a50badbdd6e0dd26c0ca7e923491d00d172eb795018c44a10e3e14dd7e9d5be" exitCode=0 Dec 12 00:36:12 crc kubenswrapper[4917]: I1212 00:36:12.593698 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"7ae2904c-ec22-4ebb-865b-3ac99848080a","Type":"ContainerDied","Data":"7a50badbdd6e0dd26c0ca7e923491d00d172eb795018c44a10e3e14dd7e9d5be"} Dec 12 00:36:13 crc kubenswrapper[4917]: I1212 00:36:13.606377 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_7ae2904c-ec22-4ebb-865b-3ac99848080a/docker-build/0.log" Dec 12 00:36:13 crc kubenswrapper[4917]: I1212 00:36:13.607714 4917 generic.go:334] "Generic (PLEG): container finished" podID="7ae2904c-ec22-4ebb-865b-3ac99848080a" containerID="06a3dd36bd2e19c807dc246055ef56762325aed48add0d4b233bcfd1f3b28099" exitCode=1 Dec 12 00:36:13 crc kubenswrapper[4917]: I1212 00:36:13.614099 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" 
event={"ID":"7ae2904c-ec22-4ebb-865b-3ac99848080a","Type":"ContainerDied","Data":"06a3dd36bd2e19c807dc246055ef56762325aed48add0d4b233bcfd1f3b28099"} Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.862717 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_7ae2904c-ec22-4ebb-865b-3ac99848080a/docker-build/0.log" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.863936 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.972867 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildworkdir\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.972931 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-pull\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.973002 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-node-pullsecrets\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.973170 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: 
"7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.973443 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974268 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-system-configs\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974408 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildcachedir\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974471 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrjw9\" (UniqueName: \"kubernetes.io/projected/7ae2904c-ec22-4ebb-865b-3ac99848080a-kube-api-access-hrjw9\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974518 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-run\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: 
\"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974556 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-root\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974588 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974593 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-push\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974628 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974698 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-ca-bundles\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974776 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-proxy-ca-bundles\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.974819 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-blob-cache\") pod \"7ae2904c-ec22-4ebb-865b-3ac99848080a\" (UID: \"7ae2904c-ec22-4ebb-865b-3ac99848080a\") " Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.975356 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.975409 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.975423 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.975435 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.975445 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.975617 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.975672 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.976081 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.976380 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.980806 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.981201 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:36:14 crc kubenswrapper[4917]: I1212 00:36:14.981242 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae2904c-ec22-4ebb-865b-3ac99848080a-kube-api-access-hrjw9" (OuterVolumeSpecName: "kube-api-access-hrjw9") pod "7ae2904c-ec22-4ebb-865b-3ac99848080a" (UID: "7ae2904c-ec22-4ebb-865b-3ac99848080a"). InnerVolumeSpecName "kube-api-access-hrjw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.082876 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.082944 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrjw9\" (UniqueName: \"kubernetes.io/projected/7ae2904c-ec22-4ebb-865b-3ac99848080a-kube-api-access-hrjw9\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.082958 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.082970 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.082983 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/7ae2904c-ec22-4ebb-865b-3ac99848080a-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 
00:36:15.082997 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.083008 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.083020 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ae2904c-ec22-4ebb-865b-3ac99848080a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.622965 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_7ae2904c-ec22-4ebb-865b-3ac99848080a/docker-build/0.log" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.623553 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"7ae2904c-ec22-4ebb-865b-3ac99848080a","Type":"ContainerDied","Data":"24bca2f4a4aff4686b95497b33e35f10ece36e79687c42856ccc1ff1a053e3ea"} Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.623622 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24bca2f4a4aff4686b95497b33e35f10ece36e79687c42856ccc1ff1a053e3ea" Dec 12 00:36:15 crc kubenswrapper[4917]: I1212 00:36:15.623627 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 12 00:36:18 crc kubenswrapper[4917]: I1212 00:36:18.602378 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:36:18 crc kubenswrapper[4917]: E1212 00:36:18.603279 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:36:21 crc kubenswrapper[4917]: I1212 00:36:21.841159 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 12 00:36:21 crc kubenswrapper[4917]: I1212 00:36:21.850151 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.458633 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 12 00:36:23 crc kubenswrapper[4917]: E1212 00:36:23.459379 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae2904c-ec22-4ebb-865b-3ac99848080a" containerName="manage-dockerfile" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.459394 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae2904c-ec22-4ebb-865b-3ac99848080a" containerName="manage-dockerfile" Dec 12 00:36:23 crc kubenswrapper[4917]: E1212 00:36:23.459408 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae2904c-ec22-4ebb-865b-3ac99848080a" containerName="docker-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.459414 4917 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="7ae2904c-ec22-4ebb-865b-3ac99848080a" containerName="docker-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.459528 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae2904c-ec22-4ebb-865b-3ac99848080a" containerName="docker-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.460522 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.463818 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-ca" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.464193 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.464238 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-global-ca" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.463828 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-sys-config" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.482731 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567006 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567064 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567104 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567125 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567153 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567171 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildworkdir\") pod 
\"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567189 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567437 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wntmp\" (UniqueName: \"kubernetes.io/projected/bea4f498-d5fb-4993-a2c0-9dca92cf681e-kube-api-access-wntmp\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567522 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567759 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567834 4917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.567915 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.611353 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae2904c-ec22-4ebb-865b-3ac99848080a" path="/var/lib/kubelet/pods/7ae2904c-ec22-4ebb-865b-3ac99848080a/volumes" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669199 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wntmp\" (UniqueName: \"kubernetes.io/projected/bea4f498-d5fb-4993-a2c0-9dca92cf681e-kube-api-access-wntmp\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669260 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669296 4917 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669317 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669347 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669350 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669377 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " 
pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669394 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669422 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669424 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669452 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669478 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669499 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.669524 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.670963 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.671102 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.671268 4917 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.671352 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.671541 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.672674 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.673153 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " 
pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.683023 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.686053 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.689878 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wntmp\" (UniqueName: \"kubernetes.io/projected/bea4f498-d5fb-4993-a2c0-9dca92cf681e-kube-api-access-wntmp\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:23 crc kubenswrapper[4917]: I1212 00:36:23.786063 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:24 crc kubenswrapper[4917]: I1212 00:36:24.008526 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 12 00:36:24 crc kubenswrapper[4917]: I1212 00:36:24.713795 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bea4f498-d5fb-4993-a2c0-9dca92cf681e","Type":"ContainerStarted","Data":"c5d5234dc6ffed45615f0bb416936008673c3e6d351bf3a98d3cd0e93b9fc120"} Dec 12 00:36:25 crc kubenswrapper[4917]: I1212 00:36:25.721676 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bea4f498-d5fb-4993-a2c0-9dca92cf681e","Type":"ContainerStarted","Data":"4950cec8204fd6f06b2c0dfe007d477748145a9315c5b3c561dfd141c937e234"} Dec 12 00:36:26 crc kubenswrapper[4917]: I1212 00:36:26.729042 4917 generic.go:334] "Generic (PLEG): container finished" podID="bea4f498-d5fb-4993-a2c0-9dca92cf681e" containerID="4950cec8204fd6f06b2c0dfe007d477748145a9315c5b3c561dfd141c937e234" exitCode=0 Dec 12 00:36:26 crc kubenswrapper[4917]: I1212 00:36:26.729089 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bea4f498-d5fb-4993-a2c0-9dca92cf681e","Type":"ContainerDied","Data":"4950cec8204fd6f06b2c0dfe007d477748145a9315c5b3c561dfd141c937e234"} Dec 12 00:36:27 crc kubenswrapper[4917]: I1212 00:36:27.739709 4917 generic.go:334] "Generic (PLEG): container finished" podID="bea4f498-d5fb-4993-a2c0-9dca92cf681e" containerID="02d69592bb353406be27ca2d938a90ae11d491239ba56aaea1c67c1e7586adf5" exitCode=0 Dec 12 00:36:27 crc kubenswrapper[4917]: I1212 00:36:27.739866 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" 
event={"ID":"bea4f498-d5fb-4993-a2c0-9dca92cf681e","Type":"ContainerDied","Data":"02d69592bb353406be27ca2d938a90ae11d491239ba56aaea1c67c1e7586adf5"} Dec 12 00:36:27 crc kubenswrapper[4917]: I1212 00:36:27.784552 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_bea4f498-d5fb-4993-a2c0-9dca92cf681e/manage-dockerfile/0.log" Dec 12 00:36:28 crc kubenswrapper[4917]: I1212 00:36:28.750395 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bea4f498-d5fb-4993-a2c0-9dca92cf681e","Type":"ContainerStarted","Data":"5150d4df2a21e5d17eb46adf5c40288299fb462c27418f230bc7e8d40717d6ac"} Dec 12 00:36:28 crc kubenswrapper[4917]: I1212 00:36:28.787529 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=5.787502619 podStartE2EDuration="5.787502619s" podCreationTimestamp="2025-12-12 00:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:36:28.785541548 +0000 UTC m=+1823.563342381" watchObservedRunningTime="2025-12-12 00:36:28.787502619 +0000 UTC m=+1823.565303432" Dec 12 00:36:30 crc kubenswrapper[4917]: I1212 00:36:30.767927 4917 generic.go:334] "Generic (PLEG): container finished" podID="bea4f498-d5fb-4993-a2c0-9dca92cf681e" containerID="5150d4df2a21e5d17eb46adf5c40288299fb462c27418f230bc7e8d40717d6ac" exitCode=0 Dec 12 00:36:30 crc kubenswrapper[4917]: I1212 00:36:30.767994 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bea4f498-d5fb-4993-a2c0-9dca92cf681e","Type":"ContainerDied","Data":"5150d4df2a21e5d17eb46adf5c40288299fb462c27418f230bc7e8d40717d6ac"} Dec 12 00:36:31 crc kubenswrapper[4917]: I1212 00:36:31.603194 4917 scope.go:117] "RemoveContainer" 
containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:36:31 crc kubenswrapper[4917]: E1212 00:36:31.603601 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.244307 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.333724 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildcachedir\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.333829 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-pull\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.333870 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-proxy-ca-bundles\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.333876 4917 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.333911 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-root\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.333953 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-system-configs\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.334914 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.334922 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.335331 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wntmp\" (UniqueName: \"kubernetes.io/projected/bea4f498-d5fb-4993-a2c0-9dca92cf681e-kube-api-access-wntmp\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.335372 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildworkdir\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.335396 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-run\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.335445 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-node-pullsecrets\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.335479 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-ca-bundles\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.335514 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-blob-cache\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.335559 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-push\") pod \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\" (UID: \"bea4f498-d5fb-4993-a2c0-9dca92cf681e\") " Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.335937 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.336525 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.336546 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.336556 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.336565 4917 reconciler_common.go:293] "Volume detached for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.336785 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.337891 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.338493 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.339589 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.342585 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.343682 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.343905 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea4f498-d5fb-4993-a2c0-9dca92cf681e-kube-api-access-wntmp" (OuterVolumeSpecName: "kube-api-access-wntmp") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "kube-api-access-wntmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.347566 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "bea4f498-d5fb-4993-a2c0-9dca92cf681e" (UID: "bea4f498-d5fb-4993-a2c0-9dca92cf681e"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.437944 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.438001 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.438014 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.438030 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/bea4f498-d5fb-4993-a2c0-9dca92cf681e-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.438039 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.438048 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wntmp\" (UniqueName: \"kubernetes.io/projected/bea4f498-d5fb-4993-a2c0-9dca92cf681e-kube-api-access-wntmp\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.438062 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-buildworkdir\") on node \"crc\" 
DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.438071 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bea4f498-d5fb-4993-a2c0-9dca92cf681e-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.788976 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bea4f498-d5fb-4993-a2c0-9dca92cf681e","Type":"ContainerDied","Data":"c5d5234dc6ffed45615f0bb416936008673c3e6d351bf3a98d3cd0e93b9fc120"} Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.789039 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d5234dc6ffed45615f0bb416936008673c3e6d351bf3a98d3cd0e93b9fc120" Dec 12 00:36:32 crc kubenswrapper[4917]: I1212 00:36:32.789148 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 12 00:36:46 crc kubenswrapper[4917]: I1212 00:36:46.603004 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:36:46 crc kubenswrapper[4917]: E1212 00:36:46.604460 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.425712 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 00:36:49 crc kubenswrapper[4917]: E1212 00:36:49.426102 4917 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bea4f498-d5fb-4993-a2c0-9dca92cf681e" containerName="git-clone" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.426122 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea4f498-d5fb-4993-a2c0-9dca92cf681e" containerName="git-clone" Dec 12 00:36:49 crc kubenswrapper[4917]: E1212 00:36:49.426140 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea4f498-d5fb-4993-a2c0-9dca92cf681e" containerName="docker-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.426147 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea4f498-d5fb-4993-a2c0-9dca92cf681e" containerName="docker-build" Dec 12 00:36:49 crc kubenswrapper[4917]: E1212 00:36:49.426157 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea4f498-d5fb-4993-a2c0-9dca92cf681e" containerName="manage-dockerfile" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.426169 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea4f498-d5fb-4993-a2c0-9dca92cf681e" containerName="manage-dockerfile" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.426324 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea4f498-d5fb-4993-a2c0-9dca92cf681e" containerName="docker-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.427701 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.431146 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.431191 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.431752 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.432615 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-tfjwq" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.433337 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.455089 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.502306 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.502395 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.502485 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.502535 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.502702 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.502797 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.502835 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.502970 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.503038 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.503084 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j2p8\" (UniqueName: \"kubernetes.io/projected/1c8bda86-77fc-4960-bceb-38f59b36887d-kube-api-access-5j2p8\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.503159 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.503238 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.503295 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604586 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604692 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604718 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j2p8\" (UniqueName: \"kubernetes.io/projected/1c8bda86-77fc-4960-bceb-38f59b36887d-kube-api-access-5j2p8\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604736 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604760 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604829 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604858 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604885 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604915 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604943 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604974 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.604992 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.605013 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.605576 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.606094 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc 
kubenswrapper[4917]: I1212 00:36:49.606442 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.606594 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.606737 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.607361 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.607566 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.608914 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.609736 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.615967 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.616155 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.623292 4917 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.631001 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j2p8\" (UniqueName: \"kubernetes.io/projected/1c8bda86-77fc-4960-bceb-38f59b36887d-kube-api-access-5j2p8\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:49 crc kubenswrapper[4917]: I1212 00:36:49.748239 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:36:50 crc kubenswrapper[4917]: I1212 00:36:50.022041 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 00:36:50 crc kubenswrapper[4917]: I1212 00:36:50.936525 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1c8bda86-77fc-4960-bceb-38f59b36887d","Type":"ContainerStarted","Data":"0083a6aa2c4cf5e42f536cca0875a827119fe894ff34cb882547f0a22bf29c80"} Dec 12 00:36:50 crc kubenswrapper[4917]: I1212 00:36:50.937259 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1c8bda86-77fc-4960-bceb-38f59b36887d","Type":"ContainerStarted","Data":"3de823fa5647b0d60af59587909e898fc35aaf2912d6261be4ef1f0eb286bf7c"} Dec 12 00:36:51 crc kubenswrapper[4917]: I1212 00:36:51.947405 4917 generic.go:334] "Generic (PLEG): container finished" 
podID="1c8bda86-77fc-4960-bceb-38f59b36887d" containerID="0083a6aa2c4cf5e42f536cca0875a827119fe894ff34cb882547f0a22bf29c80" exitCode=0 Dec 12 00:36:51 crc kubenswrapper[4917]: I1212 00:36:51.947502 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1c8bda86-77fc-4960-bceb-38f59b36887d","Type":"ContainerDied","Data":"0083a6aa2c4cf5e42f536cca0875a827119fe894ff34cb882547f0a22bf29c80"} Dec 12 00:36:52 crc kubenswrapper[4917]: I1212 00:36:52.958784 4917 generic.go:334] "Generic (PLEG): container finished" podID="1c8bda86-77fc-4960-bceb-38f59b36887d" containerID="8083fbf4be61b97529a610bab60a78cf57ca6cfe2342aaa46073aec0e37e1e08" exitCode=0 Dec 12 00:36:52 crc kubenswrapper[4917]: I1212 00:36:52.958883 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1c8bda86-77fc-4960-bceb-38f59b36887d","Type":"ContainerDied","Data":"8083fbf4be61b97529a610bab60a78cf57ca6cfe2342aaa46073aec0e37e1e08"} Dec 12 00:36:53 crc kubenswrapper[4917]: I1212 00:36:53.003993 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_1c8bda86-77fc-4960-bceb-38f59b36887d/manage-dockerfile/0.log" Dec 12 00:36:53 crc kubenswrapper[4917]: I1212 00:36:53.972927 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1c8bda86-77fc-4960-bceb-38f59b36887d","Type":"ContainerStarted","Data":"0a49106dae0303a442a217aaedaf58d8925759337362a934bdb4f3ccf456d9a2"} Dec 12 00:36:54 crc kubenswrapper[4917]: I1212 00:36:54.008169 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=5.008142255 podStartE2EDuration="5.008142255s" podCreationTimestamp="2025-12-12 00:36:49 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:36:54.007286283 +0000 UTC m=+1848.785087116" watchObservedRunningTime="2025-12-12 00:36:54.008142255 +0000 UTC m=+1848.785943078" Dec 12 00:36:59 crc kubenswrapper[4917]: I1212 00:36:59.602842 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:36:59 crc kubenswrapper[4917]: E1212 00:36:59.603504 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:37:10 crc kubenswrapper[4917]: I1212 00:37:10.602535 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:37:10 crc kubenswrapper[4917]: E1212 00:37:10.603555 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:37:24 crc kubenswrapper[4917]: I1212 00:37:24.601872 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:37:24 crc kubenswrapper[4917]: E1212 00:37:24.602730 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:37:35 crc kubenswrapper[4917]: I1212 00:37:35.283177 4917 generic.go:334] "Generic (PLEG): container finished" podID="1c8bda86-77fc-4960-bceb-38f59b36887d" containerID="0a49106dae0303a442a217aaedaf58d8925759337362a934bdb4f3ccf456d9a2" exitCode=0 Dec 12 00:37:35 crc kubenswrapper[4917]: I1212 00:37:35.283272 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1c8bda86-77fc-4960-bceb-38f59b36887d","Type":"ContainerDied","Data":"0a49106dae0303a442a217aaedaf58d8925759337362a934bdb4f3ccf456d9a2"} Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.605162 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703552 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703611 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-pull\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703631 4917 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-buildcachedir\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703685 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-buildworkdir\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703749 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-root\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703767 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-ca-bundles\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703790 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-build-blob-cache\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703832 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j2p8\" (UniqueName: \"kubernetes.io/projected/1c8bda86-77fc-4960-bceb-38f59b36887d-kube-api-access-5j2p8\") pod 
\"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703868 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-push\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703889 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-run\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703906 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-proxy-ca-bundles\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703928 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-node-pullsecrets\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.703974 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-system-configs\") pod \"1c8bda86-77fc-4960-bceb-38f59b36887d\" (UID: \"1c8bda86-77fc-4960-bceb-38f59b36887d\") " Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.706441 4917 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.706443 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.706491 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.707319 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.708114 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.709433 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.712170 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8bda86-77fc-4960-bceb-38f59b36887d-kube-api-access-5j2p8" (OuterVolumeSpecName: "kube-api-access-5j2p8") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "kube-api-access-5j2p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.713114 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.713832 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.715445 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-pull" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-pull") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "builder-dockercfg-tfjwq-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.715893 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-push" (OuterVolumeSpecName: "builder-dockercfg-tfjwq-push") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "builder-dockercfg-tfjwq-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809385 4917 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809434 4917 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809443 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j2p8\" (UniqueName: \"kubernetes.io/projected/1c8bda86-77fc-4960-bceb-38f59b36887d-kube-api-access-5j2p8\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809453 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-push\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-push\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809464 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809473 4917 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809481 4917 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-node-pullsecrets\") on node \"crc\" DevicePath 
\"\"" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809489 4917 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1c8bda86-77fc-4960-bceb-38f59b36887d-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809498 4917 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809508 4917 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-tfjwq-pull\" (UniqueName: \"kubernetes.io/secret/1c8bda86-77fc-4960-bceb-38f59b36887d-builder-dockercfg-tfjwq-pull\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:36 crc kubenswrapper[4917]: I1212 00:37:36.809516 4917 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1c8bda86-77fc-4960-bceb-38f59b36887d-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:37 crc kubenswrapper[4917]: I1212 00:37:37.308258 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1c8bda86-77fc-4960-bceb-38f59b36887d","Type":"ContainerDied","Data":"3de823fa5647b0d60af59587909e898fc35aaf2912d6261be4ef1f0eb286bf7c"} Dec 12 00:37:37 crc kubenswrapper[4917]: I1212 00:37:37.308304 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3de823fa5647b0d60af59587909e898fc35aaf2912d6261be4ef1f0eb286bf7c" Dec 12 00:37:37 crc kubenswrapper[4917]: I1212 00:37:37.308407 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 00:37:37 crc kubenswrapper[4917]: I1212 00:37:37.764712 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:37:37 crc kubenswrapper[4917]: I1212 00:37:37.823913 4917 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.601819 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:37:38 crc kubenswrapper[4917]: E1212 00:37:38.602237 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.787436 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rbmhv"] Dec 12 00:37:38 crc kubenswrapper[4917]: E1212 00:37:38.788125 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bda86-77fc-4960-bceb-38f59b36887d" containerName="manage-dockerfile" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.788152 4917 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c8bda86-77fc-4960-bceb-38f59b36887d" containerName="manage-dockerfile" Dec 12 00:37:38 crc kubenswrapper[4917]: E1212 00:37:38.788167 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bda86-77fc-4960-bceb-38f59b36887d" containerName="git-clone" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.788174 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bda86-77fc-4960-bceb-38f59b36887d" containerName="git-clone" Dec 12 00:37:38 crc kubenswrapper[4917]: E1212 00:37:38.788184 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bda86-77fc-4960-bceb-38f59b36887d" containerName="docker-build" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.788192 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bda86-77fc-4960-bceb-38f59b36887d" containerName="docker-build" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.788354 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bda86-77fc-4960-bceb-38f59b36887d" containerName="docker-build" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.788983 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-rbmhv" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.799599 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rbmhv"] Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.801849 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-operators-dockercfg-zjvwb" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.841332 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w45t2\" (UniqueName: \"kubernetes.io/projected/7a917444-6099-42f4-b08c-2ba795dc78bd-kube-api-access-w45t2\") pod \"service-telemetry-framework-operators-rbmhv\" (UID: \"7a917444-6099-42f4-b08c-2ba795dc78bd\") " pod="service-telemetry/service-telemetry-framework-operators-rbmhv" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.944014 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w45t2\" (UniqueName: \"kubernetes.io/projected/7a917444-6099-42f4-b08c-2ba795dc78bd-kube-api-access-w45t2\") pod \"service-telemetry-framework-operators-rbmhv\" (UID: \"7a917444-6099-42f4-b08c-2ba795dc78bd\") " pod="service-telemetry/service-telemetry-framework-operators-rbmhv" Dec 12 00:37:38 crc kubenswrapper[4917]: I1212 00:37:38.970203 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w45t2\" (UniqueName: \"kubernetes.io/projected/7a917444-6099-42f4-b08c-2ba795dc78bd-kube-api-access-w45t2\") pod \"service-telemetry-framework-operators-rbmhv\" (UID: \"7a917444-6099-42f4-b08c-2ba795dc78bd\") " pod="service-telemetry/service-telemetry-framework-operators-rbmhv" Dec 12 00:37:39 crc kubenswrapper[4917]: I1212 00:37:39.112810 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1c8bda86-77fc-4960-bceb-38f59b36887d" (UID: "1c8bda86-77fc-4960-bceb-38f59b36887d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:37:39 crc kubenswrapper[4917]: I1212 00:37:39.117213 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-rbmhv" Dec 12 00:37:39 crc kubenswrapper[4917]: I1212 00:37:39.160049 4917 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1c8bda86-77fc-4960-bceb-38f59b36887d-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 00:37:39 crc kubenswrapper[4917]: I1212 00:37:39.346236 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rbmhv"] Dec 12 00:37:39 crc kubenswrapper[4917]: I1212 00:37:39.363863 4917 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 00:37:40 crc kubenswrapper[4917]: I1212 00:37:40.332743 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-rbmhv" event={"ID":"7a917444-6099-42f4-b08c-2ba795dc78bd","Type":"ContainerStarted","Data":"abb8367cef9d2cb4e1d97b30b90845ec2ab42446f3c7298d58daee16db7aa275"} Dec 12 00:37:42 crc kubenswrapper[4917]: I1212 00:37:42.710612 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rbmhv"] Dec 12 00:37:42 crc kubenswrapper[4917]: I1212 00:37:42.723073 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-kpqxm"] Dec 12 00:37:42 crc kubenswrapper[4917]: I1212 00:37:42.724495 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-kpqxm" Dec 12 00:37:42 crc kubenswrapper[4917]: I1212 00:37:42.740167 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-kpqxm"] Dec 12 00:37:42 crc kubenswrapper[4917]: I1212 00:37:42.859786 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwpk\" (UniqueName: \"kubernetes.io/projected/d4e5e804-be59-4587-9a54-fbe0cd608604-kube-api-access-9rwpk\") pod \"service-telemetry-framework-operators-kpqxm\" (UID: \"d4e5e804-be59-4587-9a54-fbe0cd608604\") " pod="service-telemetry/service-telemetry-framework-operators-kpqxm" Dec 12 00:37:42 crc kubenswrapper[4917]: I1212 00:37:42.961338 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rwpk\" (UniqueName: \"kubernetes.io/projected/d4e5e804-be59-4587-9a54-fbe0cd608604-kube-api-access-9rwpk\") pod \"service-telemetry-framework-operators-kpqxm\" (UID: \"d4e5e804-be59-4587-9a54-fbe0cd608604\") " pod="service-telemetry/service-telemetry-framework-operators-kpqxm" Dec 12 00:37:42 crc kubenswrapper[4917]: I1212 00:37:42.987249 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rwpk\" (UniqueName: \"kubernetes.io/projected/d4e5e804-be59-4587-9a54-fbe0cd608604-kube-api-access-9rwpk\") pod \"service-telemetry-framework-operators-kpqxm\" (UID: \"d4e5e804-be59-4587-9a54-fbe0cd608604\") " pod="service-telemetry/service-telemetry-framework-operators-kpqxm" Dec 12 00:37:43 crc kubenswrapper[4917]: I1212 00:37:43.043924 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-kpqxm" Dec 12 00:37:43 crc kubenswrapper[4917]: I1212 00:37:43.292905 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-kpqxm"] Dec 12 00:37:43 crc kubenswrapper[4917]: I1212 00:37:43.377323 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-kpqxm" event={"ID":"d4e5e804-be59-4587-9a54-fbe0cd608604","Type":"ContainerStarted","Data":"a07cda504ff40c48103b98efeb039f2c2961d533287141dbdcde70b2f9547af8"} Dec 12 00:37:50 crc kubenswrapper[4917]: I1212 00:37:50.602364 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:37:50 crc kubenswrapper[4917]: E1212 00:37:50.602636 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:37:59 crc kubenswrapper[4917]: E1212 00:37:59.056385 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 00:37:59 crc kubenswrapper[4917]: E1212 00:37:59.057687 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9rwpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-framework-operators-kpqxm_service-telemetry(d4e5e804-be59-4587-9a54-fbe0cd608604): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 00:37:59 crc kubenswrapper[4917]: E1212 00:37:59.058933 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-framework-operators-kpqxm" podUID="d4e5e804-be59-4587-9a54-fbe0cd608604" Dec 12 00:37:59 crc kubenswrapper[4917]: E1212 00:37:59.085918 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 00:37:59 crc kubenswrapper[4917]: E1212 00:37:59.087206 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w45t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-framework-operators-rbmhv_service-telemetry(7a917444-6099-42f4-b08c-2ba795dc78bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 00:37:59 crc kubenswrapper[4917]: E1212 00:37:59.088496 4917 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-framework-operators-rbmhv" podUID="7a917444-6099-42f4-b08c-2ba795dc78bd" Dec 12 00:37:59 crc kubenswrapper[4917]: E1212 00:37:59.509255 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/service-telemetry-framework-operators-kpqxm" podUID="d4e5e804-be59-4587-9a54-fbe0cd608604" Dec 12 00:37:59 crc kubenswrapper[4917]: I1212 00:37:59.781585 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-rbmhv" Dec 12 00:37:59 crc kubenswrapper[4917]: I1212 00:37:59.851584 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w45t2\" (UniqueName: \"kubernetes.io/projected/7a917444-6099-42f4-b08c-2ba795dc78bd-kube-api-access-w45t2\") pod \"7a917444-6099-42f4-b08c-2ba795dc78bd\" (UID: \"7a917444-6099-42f4-b08c-2ba795dc78bd\") " Dec 12 00:37:59 crc kubenswrapper[4917]: I1212 00:37:59.860114 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a917444-6099-42f4-b08c-2ba795dc78bd-kube-api-access-w45t2" (OuterVolumeSpecName: "kube-api-access-w45t2") pod "7a917444-6099-42f4-b08c-2ba795dc78bd" (UID: "7a917444-6099-42f4-b08c-2ba795dc78bd"). InnerVolumeSpecName "kube-api-access-w45t2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:37:59 crc kubenswrapper[4917]: I1212 00:37:59.953789 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w45t2\" (UniqueName: \"kubernetes.io/projected/7a917444-6099-42f4-b08c-2ba795dc78bd-kube-api-access-w45t2\") on node \"crc\" DevicePath \"\"" Dec 12 00:38:00 crc kubenswrapper[4917]: I1212 00:38:00.516762 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-rbmhv" event={"ID":"7a917444-6099-42f4-b08c-2ba795dc78bd","Type":"ContainerDied","Data":"abb8367cef9d2cb4e1d97b30b90845ec2ab42446f3c7298d58daee16db7aa275"} Dec 12 00:38:00 crc kubenswrapper[4917]: I1212 00:38:00.516870 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-rbmhv" Dec 12 00:38:00 crc kubenswrapper[4917]: I1212 00:38:00.574842 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rbmhv"] Dec 12 00:38:00 crc kubenswrapper[4917]: I1212 00:38:00.581519 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rbmhv"] Dec 12 00:38:01 crc kubenswrapper[4917]: I1212 00:38:01.602281 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:38:01 crc kubenswrapper[4917]: E1212 00:38:01.602504 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:38:01 crc kubenswrapper[4917]: I1212 00:38:01.611792 4917 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a917444-6099-42f4-b08c-2ba795dc78bd" path="/var/lib/kubelet/pods/7a917444-6099-42f4-b08c-2ba795dc78bd/volumes" Dec 12 00:38:12 crc kubenswrapper[4917]: I1212 00:38:12.602424 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:38:12 crc kubenswrapper[4917]: E1212 00:38:12.603257 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:38:15 crc kubenswrapper[4917]: I1212 00:38:15.635171 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-kpqxm" event={"ID":"d4e5e804-be59-4587-9a54-fbe0cd608604","Type":"ContainerStarted","Data":"539ff3855ff784e4ab6bfa8dab414524de4ba6ff1a83be44ddb6660a35903756"} Dec 12 00:38:15 crc kubenswrapper[4917]: I1212 00:38:15.659361 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-kpqxm" podStartSLOduration=2.186635031 podStartE2EDuration="33.65933969s" podCreationTimestamp="2025-12-12 00:37:42 +0000 UTC" firstStartedPulling="2025-12-12 00:37:43.307339206 +0000 UTC m=+1898.085140019" lastFinishedPulling="2025-12-12 00:38:14.780043865 +0000 UTC m=+1929.557844678" observedRunningTime="2025-12-12 00:38:15.652838329 +0000 UTC m=+1930.430639142" watchObservedRunningTime="2025-12-12 00:38:15.65933969 +0000 UTC m=+1930.437140503" Dec 12 00:38:23 crc kubenswrapper[4917]: I1212 00:38:23.044830 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="service-telemetry/service-telemetry-framework-operators-kpqxm" Dec 12 00:38:23 crc kubenswrapper[4917]: I1212 00:38:23.045233 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/service-telemetry-framework-operators-kpqxm" Dec 12 00:38:23 crc kubenswrapper[4917]: I1212 00:38:23.075768 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/service-telemetry-framework-operators-kpqxm" Dec 12 00:38:23 crc kubenswrapper[4917]: I1212 00:38:23.720298 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/service-telemetry-framework-operators-kpqxm" Dec 12 00:38:26 crc kubenswrapper[4917]: I1212 00:38:26.601477 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:38:26 crc kubenswrapper[4917]: E1212 00:38:26.602548 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:38:30 crc kubenswrapper[4917]: I1212 00:38:30.993329 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv"] Dec 12 00:38:30 crc kubenswrapper[4917]: I1212 00:38:30.995593 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.007345 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv"] Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.109930 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.110089 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5ncx\" (UniqueName: \"kubernetes.io/projected/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-kube-api-access-s5ncx\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.110242 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.212303 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-bundle\") 
pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.212431 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5ncx\" (UniqueName: \"kubernetes.io/projected/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-kube-api-access-s5ncx\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.212461 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.497423 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt"] Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.499298 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.517989 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt"] Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.618276 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.618390 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.618432 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9nw\" (UniqueName: \"kubernetes.io/projected/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-kube-api-access-ft9nw\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.720380 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-util\") pod 
\"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.720465 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft9nw\" (UniqueName: \"kubernetes.io/projected/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-kube-api-access-ft9nw\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:31 crc kubenswrapper[4917]: I1212 00:38:31.720599 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:33 crc kubenswrapper[4917]: E1212 00:38:33.175120 4917 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.574s" Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.175730 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.175730 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-util\") 
pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.175780 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.176323 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.184250 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft9nw\" (UniqueName: \"kubernetes.io/projected/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-kube-api-access-ft9nw\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.184878 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5ncx\" (UniqueName: \"kubernetes.io/projected/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-kube-api-access-s5ncx\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " 
pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.476760 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.476836 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.710552 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv"] Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.754487 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt"] Dec 12 00:38:33 crc kubenswrapper[4917]: W1212 00:38:33.756458 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfa03ec5_fed6_4dc7_95c6_aadebfe7ee06.slice/crio-57d05e13e5b4abdaff59318b6192cbe3a37c896184bbfe8018d0da09639bdb42 WatchSource:0}: Error finding container 57d05e13e5b4abdaff59318b6192cbe3a37c896184bbfe8018d0da09639bdb42: Status 404 returned error can't find the container with id 57d05e13e5b4abdaff59318b6192cbe3a37c896184bbfe8018d0da09639bdb42 Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.774133 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" event={"ID":"1b423bc8-e23a-4582-9f0b-9f37e6cd5519","Type":"ContainerStarted","Data":"a74a7e42887eaeb3ee670d540490f8418473bc5859424b1df5e1ebb7128bada1"} Dec 12 00:38:33 crc kubenswrapper[4917]: I1212 00:38:33.775335 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" event={"ID":"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06","Type":"ContainerStarted","Data":"57d05e13e5b4abdaff59318b6192cbe3a37c896184bbfe8018d0da09639bdb42"} Dec 12 00:38:34 crc kubenswrapper[4917]: I1212 00:38:34.785839 4917 generic.go:334] "Generic (PLEG): container finished" podID="1b423bc8-e23a-4582-9f0b-9f37e6cd5519" containerID="d6acc4b6d36991a7b9cf8635146518b87c8736c3c471b537aae2a84add824eb9" exitCode=0 Dec 12 00:38:34 crc kubenswrapper[4917]: I1212 00:38:34.785930 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" event={"ID":"1b423bc8-e23a-4582-9f0b-9f37e6cd5519","Type":"ContainerDied","Data":"d6acc4b6d36991a7b9cf8635146518b87c8736c3c471b537aae2a84add824eb9"} Dec 12 00:38:34 crc kubenswrapper[4917]: I1212 00:38:34.789687 4917 generic.go:334] "Generic (PLEG): container finished" podID="cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" containerID="3decb0a3d9f5423e43f02f4e29dc797bbe9cb18c924d0836f0bc23b78a6d015e" exitCode=0 Dec 12 00:38:34 crc kubenswrapper[4917]: I1212 00:38:34.789743 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" event={"ID":"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06","Type":"ContainerDied","Data":"3decb0a3d9f5423e43f02f4e29dc797bbe9cb18c924d0836f0bc23b78a6d015e"} Dec 12 00:38:35 crc kubenswrapper[4917]: I1212 00:38:35.810631 4917 generic.go:334] "Generic (PLEG): container finished" podID="cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" containerID="2265db18cdfcec08a9835252f391e7e882458578a6a5acaa1dc8f237789bf378" exitCode=0 Dec 12 00:38:35 crc kubenswrapper[4917]: I1212 00:38:35.810781 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" 
event={"ID":"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06","Type":"ContainerDied","Data":"2265db18cdfcec08a9835252f391e7e882458578a6a5acaa1dc8f237789bf378"} Dec 12 00:38:35 crc kubenswrapper[4917]: I1212 00:38:35.814605 4917 generic.go:334] "Generic (PLEG): container finished" podID="1b423bc8-e23a-4582-9f0b-9f37e6cd5519" containerID="21b2984b10b2835349b6df520069b6ba874d8404cc2a2f633ee83eb5459598a6" exitCode=0 Dec 12 00:38:35 crc kubenswrapper[4917]: I1212 00:38:35.814707 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" event={"ID":"1b423bc8-e23a-4582-9f0b-9f37e6cd5519","Type":"ContainerDied","Data":"21b2984b10b2835349b6df520069b6ba874d8404cc2a2f633ee83eb5459598a6"} Dec 12 00:38:36 crc kubenswrapper[4917]: I1212 00:38:36.825264 4917 generic.go:334] "Generic (PLEG): container finished" podID="cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" containerID="5d60b39760207b2cfcc3a84c950a304c86fb6ee09b0b0c5aa5a9ed2fb2721f1d" exitCode=0 Dec 12 00:38:36 crc kubenswrapper[4917]: I1212 00:38:36.825395 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" event={"ID":"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06","Type":"ContainerDied","Data":"5d60b39760207b2cfcc3a84c950a304c86fb6ee09b0b0c5aa5a9ed2fb2721f1d"} Dec 12 00:38:36 crc kubenswrapper[4917]: I1212 00:38:36.828358 4917 generic.go:334] "Generic (PLEG): container finished" podID="1b423bc8-e23a-4582-9f0b-9f37e6cd5519" containerID="f0d4b6ef46c2370344b6e3564f62b0965871de3303447101f3f3e22b3e988b37" exitCode=0 Dec 12 00:38:36 crc kubenswrapper[4917]: I1212 00:38:36.828427 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" event={"ID":"1b423bc8-e23a-4582-9f0b-9f37e6cd5519","Type":"ContainerDied","Data":"f0d4b6ef46c2370344b6e3564f62b0965871de3303447101f3f3e22b3e988b37"} Dec 12 00:38:38 
crc kubenswrapper[4917]: I1212 00:38:38.161989 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.169304 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.225319 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5ncx\" (UniqueName: \"kubernetes.io/projected/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-kube-api-access-s5ncx\") pod \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.225415 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-bundle\") pod \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.225451 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-util\") pod \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\" (UID: \"1b423bc8-e23a-4582-9f0b-9f37e6cd5519\") " Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.225491 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-bundle\") pod \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.225513 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft9nw\" 
(UniqueName: \"kubernetes.io/projected/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-kube-api-access-ft9nw\") pod \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.225689 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-util\") pod \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\" (UID: \"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06\") " Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.236489 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-bundle" (OuterVolumeSpecName: "bundle") pod "cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" (UID: "cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.238097 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-bundle" (OuterVolumeSpecName: "bundle") pod "1b423bc8-e23a-4582-9f0b-9f37e6cd5519" (UID: "1b423bc8-e23a-4582-9f0b-9f37e6cd5519"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.242553 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-util" (OuterVolumeSpecName: "util") pod "cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" (UID: "cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.244284 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-kube-api-access-ft9nw" (OuterVolumeSpecName: "kube-api-access-ft9nw") pod "cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" (UID: "cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06"). InnerVolumeSpecName "kube-api-access-ft9nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.254550 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-kube-api-access-s5ncx" (OuterVolumeSpecName: "kube-api-access-s5ncx") pod "1b423bc8-e23a-4582-9f0b-9f37e6cd5519" (UID: "1b423bc8-e23a-4582-9f0b-9f37e6cd5519"). InnerVolumeSpecName "kube-api-access-s5ncx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.264819 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-util" (OuterVolumeSpecName: "util") pod "1b423bc8-e23a-4582-9f0b-9f37e6cd5519" (UID: "1b423bc8-e23a-4582-9f0b-9f37e6cd5519"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.328301 4917 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.328337 4917 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-util\") on node \"crc\" DevicePath \"\"" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.328345 4917 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.328355 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft9nw\" (UniqueName: \"kubernetes.io/projected/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-kube-api-access-ft9nw\") on node \"crc\" DevicePath \"\"" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.328366 4917 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06-util\") on node \"crc\" DevicePath \"\"" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.328376 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5ncx\" (UniqueName: \"kubernetes.io/projected/1b423bc8-e23a-4582-9f0b-9f37e6cd5519-kube-api-access-s5ncx\") on node \"crc\" DevicePath \"\"" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.844123 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" event={"ID":"cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06","Type":"ContainerDied","Data":"57d05e13e5b4abdaff59318b6192cbe3a37c896184bbfe8018d0da09639bdb42"} Dec 12 00:38:38 crc 
kubenswrapper[4917]: I1212 00:38:38.844208 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57d05e13e5b4abdaff59318b6192cbe3a37c896184bbfe8018d0da09639bdb42" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.844180 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c098p7jt" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.846928 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" event={"ID":"1b423bc8-e23a-4582-9f0b-9f37e6cd5519","Type":"ContainerDied","Data":"a74a7e42887eaeb3ee670d540490f8418473bc5859424b1df5e1ebb7128bada1"} Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.846971 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74a7e42887eaeb3ee670d540490f8418473bc5859424b1df5e1ebb7128bada1" Dec 12 00:38:38 crc kubenswrapper[4917]: I1212 00:38:38.847077 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a59xnv" Dec 12 00:38:41 crc kubenswrapper[4917]: I1212 00:38:41.602293 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:38:41 crc kubenswrapper[4917]: I1212 00:38:41.868791 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"9ae58f5b179e6c967cab3b964e831e89319949df0b3e0b977568f62d47cf198d"} Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.771897 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-68478559c6-r5bf2"] Dec 12 00:38:44 crc kubenswrapper[4917]: E1212 00:38:44.773125 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b423bc8-e23a-4582-9f0b-9f37e6cd5519" containerName="util" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.773140 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b423bc8-e23a-4582-9f0b-9f37e6cd5519" containerName="util" Dec 12 00:38:44 crc kubenswrapper[4917]: E1212 00:38:44.773165 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b423bc8-e23a-4582-9f0b-9f37e6cd5519" containerName="pull" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.773171 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b423bc8-e23a-4582-9f0b-9f37e6cd5519" containerName="pull" Dec 12 00:38:44 crc kubenswrapper[4917]: E1212 00:38:44.773184 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" containerName="extract" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.773192 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" containerName="extract" Dec 12 00:38:44 crc kubenswrapper[4917]: E1212 
00:38:44.773205 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" containerName="pull" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.773211 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" containerName="pull" Dec 12 00:38:44 crc kubenswrapper[4917]: E1212 00:38:44.773219 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b423bc8-e23a-4582-9f0b-9f37e6cd5519" containerName="extract" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.773225 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b423bc8-e23a-4582-9f0b-9f37e6cd5519" containerName="extract" Dec 12 00:38:44 crc kubenswrapper[4917]: E1212 00:38:44.773237 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" containerName="util" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.773243 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" containerName="util" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.773366 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa03ec5-fed6-4dc7-95c6-aadebfe7ee06" containerName="extract" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.773379 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b423bc8-e23a-4582-9f0b-9f37e6cd5519" containerName="extract" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.773976 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.776344 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-lst6d" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.804723 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-68478559c6-r5bf2"] Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.820787 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/139ada12-b9b2-4a1b-acfa-2bf1b51fb12c-runner\") pod \"service-telemetry-operator-68478559c6-r5bf2\" (UID: \"139ada12-b9b2-4a1b-acfa-2bf1b51fb12c\") " pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.820873 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25gg8\" (UniqueName: \"kubernetes.io/projected/139ada12-b9b2-4a1b-acfa-2bf1b51fb12c-kube-api-access-25gg8\") pod \"service-telemetry-operator-68478559c6-r5bf2\" (UID: \"139ada12-b9b2-4a1b-acfa-2bf1b51fb12c\") " pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.922152 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/139ada12-b9b2-4a1b-acfa-2bf1b51fb12c-runner\") pod \"service-telemetry-operator-68478559c6-r5bf2\" (UID: \"139ada12-b9b2-4a1b-acfa-2bf1b51fb12c\") " pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.922245 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25gg8\" (UniqueName: 
\"kubernetes.io/projected/139ada12-b9b2-4a1b-acfa-2bf1b51fb12c-kube-api-access-25gg8\") pod \"service-telemetry-operator-68478559c6-r5bf2\" (UID: \"139ada12-b9b2-4a1b-acfa-2bf1b51fb12c\") " pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.922690 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/139ada12-b9b2-4a1b-acfa-2bf1b51fb12c-runner\") pod \"service-telemetry-operator-68478559c6-r5bf2\" (UID: \"139ada12-b9b2-4a1b-acfa-2bf1b51fb12c\") " pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" Dec 12 00:38:44 crc kubenswrapper[4917]: I1212 00:38:44.946826 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25gg8\" (UniqueName: \"kubernetes.io/projected/139ada12-b9b2-4a1b-acfa-2bf1b51fb12c-kube-api-access-25gg8\") pod \"service-telemetry-operator-68478559c6-r5bf2\" (UID: \"139ada12-b9b2-4a1b-acfa-2bf1b51fb12c\") " pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" Dec 12 00:38:45 crc kubenswrapper[4917]: I1212 00:38:45.094622 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" Dec 12 00:38:45 crc kubenswrapper[4917]: I1212 00:38:45.309881 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-68478559c6-r5bf2"] Dec 12 00:38:45 crc kubenswrapper[4917]: I1212 00:38:45.895206 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" event={"ID":"139ada12-b9b2-4a1b-acfa-2bf1b51fb12c","Type":"ContainerStarted","Data":"bccc8c7fea5e5e13c8a8dc5169912f48df55d18406daf97d6be20ec15f699f52"} Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.529371 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg"] Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.530732 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.540395 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-fdj44" Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.574253 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg"] Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.660904 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nfv9\" (UniqueName: \"kubernetes.io/projected/fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6-kube-api-access-9nfv9\") pod \"smart-gateway-operator-5bdd75688b-nlpwg\" (UID: \"fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6\") " pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.661073 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"runner\" (UniqueName: \"kubernetes.io/empty-dir/fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6-runner\") pod \"smart-gateway-operator-5bdd75688b-nlpwg\" (UID: \"fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6\") " pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.764878 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nfv9\" (UniqueName: \"kubernetes.io/projected/fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6-kube-api-access-9nfv9\") pod \"smart-gateway-operator-5bdd75688b-nlpwg\" (UID: \"fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6\") " pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.765002 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6-runner\") pod \"smart-gateway-operator-5bdd75688b-nlpwg\" (UID: \"fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6\") " pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.765768 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6-runner\") pod \"smart-gateway-operator-5bdd75688b-nlpwg\" (UID: \"fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6\") " pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.818785 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nfv9\" (UniqueName: \"kubernetes.io/projected/fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6-kube-api-access-9nfv9\") pod \"smart-gateway-operator-5bdd75688b-nlpwg\" (UID: \"fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6\") " pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" Dec 12 00:38:47 crc kubenswrapper[4917]: I1212 00:38:47.846847 4917 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" Dec 12 00:38:48 crc kubenswrapper[4917]: I1212 00:38:48.165708 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg"] Dec 12 00:38:48 crc kubenswrapper[4917]: I1212 00:38:48.930933 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" event={"ID":"fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6","Type":"ContainerStarted","Data":"6d9649fd7832dba5bc26258948531785a51ce431f31e722e751f8a2bc51270dc"} Dec 12 00:39:03 crc kubenswrapper[4917]: E1212 00:39:03.126510 4917 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/service-telemetry-operator:stable-1.5" Dec 12 00:39:03 crc kubenswrapper[4917]: E1212 00:39:03.128170 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/infrawatch/service-telemetry-operator:stable-1.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:service-telemetry-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_WEBHOOK_SNMP_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/prometheus-webhook-snmp:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_IMAGE,Value:quay.io/prometheus/prometheus:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER_IMAGE,Value:quay.io/prometheus/alertmanager:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:service-telemetry-operator.v1.5.1765499738,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25gg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*10006700
00,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-68478559c6-r5bf2_service-telemetry(139ada12-b9b2-4a1b-acfa-2bf1b51fb12c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 00:39:03 crc kubenswrapper[4917]: E1212 00:39:03.133341 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" podUID="139ada12-b9b2-4a1b-acfa-2bf1b51fb12c" Dec 12 00:39:03 crc kubenswrapper[4917]: E1212 00:39:03.194473 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/service-telemetry-operator:stable-1.5\\\"\"" pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" podUID="139ada12-b9b2-4a1b-acfa-2bf1b51fb12c" Dec 12 00:39:13 crc kubenswrapper[4917]: I1212 00:39:13.261523 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" event={"ID":"fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6","Type":"ContainerStarted","Data":"84e401466fb110ce5fc1b8184e43374b96ea7ff4f0c197b09fd62c414a628afe"} Dec 12 00:39:13 crc kubenswrapper[4917]: I1212 00:39:13.285059 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-5bdd75688b-nlpwg" podStartSLOduration=1.7052043810000002 podStartE2EDuration="26.285037798s" 
podCreationTimestamp="2025-12-12 00:38:47 +0000 UTC" firstStartedPulling="2025-12-12 00:38:48.167114358 +0000 UTC m=+1962.944915171" lastFinishedPulling="2025-12-12 00:39:12.746947775 +0000 UTC m=+1987.524748588" observedRunningTime="2025-12-12 00:39:13.279101881 +0000 UTC m=+1988.056902704" watchObservedRunningTime="2025-12-12 00:39:13.285037798 +0000 UTC m=+1988.062838611" Dec 12 00:39:18 crc kubenswrapper[4917]: I1212 00:39:18.308032 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" event={"ID":"139ada12-b9b2-4a1b-acfa-2bf1b51fb12c","Type":"ContainerStarted","Data":"39a81628c79033dd32b6a9f7d517524a525dd3a107cca02f63cfb3eab051e350"} Dec 12 00:39:18 crc kubenswrapper[4917]: I1212 00:39:18.336282 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-68478559c6-r5bf2" podStartSLOduration=1.562944901 podStartE2EDuration="34.336236622s" podCreationTimestamp="2025-12-12 00:38:44 +0000 UTC" firstStartedPulling="2025-12-12 00:38:45.323459875 +0000 UTC m=+1960.101260688" lastFinishedPulling="2025-12-12 00:39:18.096751596 +0000 UTC m=+1992.874552409" observedRunningTime="2025-12-12 00:39:18.325684154 +0000 UTC m=+1993.103484977" watchObservedRunningTime="2025-12-12 00:39:18.336236622 +0000 UTC m=+1993.114037445" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.363330 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t8tz6"] Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.364973 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.368299 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.368910 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.369045 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.369102 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.369285 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.369299 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.371393 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-j9j5g" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.386856 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t8tz6"] Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.439429 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: 
\"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.439499 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-users\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.439536 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.439563 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdjzw\" (UniqueName: \"kubernetes.io/projected/ddf6e524-5c84-416a-a180-7e8ecffa312b-kube-api-access-hdjzw\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.439602 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.439639 4917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-config\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.439687 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.541418 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.541493 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.541523 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: 
\"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-users\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.541563 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.541599 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdjzw\" (UniqueName: \"kubernetes.io/projected/ddf6e524-5c84-416a-a180-7e8ecffa312b-kube-api-access-hdjzw\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.541642 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.541705 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-config\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc 
kubenswrapper[4917]: I1212 00:39:43.549832 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-users\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.550035 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.551248 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-config\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.551300 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.552194 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-credentials\") pod 
\"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.557723 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.562560 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdjzw\" (UniqueName: \"kubernetes.io/projected/ddf6e524-5c84-416a-a180-7e8ecffa312b-kube-api-access-hdjzw\") pod \"default-interconnect-68864d46cb-t8tz6\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.722431 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:39:43 crc kubenswrapper[4917]: I1212 00:39:43.981996 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t8tz6"] Dec 12 00:39:44 crc kubenswrapper[4917]: I1212 00:39:44.486436 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" event={"ID":"ddf6e524-5c84-416a-a180-7e8ecffa312b","Type":"ContainerStarted","Data":"f6b16eecafc6a7f35069801f4e7d2e459701e0d985d2734154e91065e4959715"} Dec 12 00:39:50 crc kubenswrapper[4917]: I1212 00:39:50.529026 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" event={"ID":"ddf6e524-5c84-416a-a180-7e8ecffa312b","Type":"ContainerStarted","Data":"9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575"} Dec 12 00:39:50 crc kubenswrapper[4917]: I1212 00:39:50.554618 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" podStartSLOduration=1.33680156 podStartE2EDuration="7.554588308s" podCreationTimestamp="2025-12-12 00:39:43 +0000 UTC" firstStartedPulling="2025-12-12 00:39:43.989079645 +0000 UTC m=+2018.766880458" lastFinishedPulling="2025-12-12 00:39:50.206866383 +0000 UTC m=+2024.984667206" observedRunningTime="2025-12-12 00:39:50.550757437 +0000 UTC m=+2025.328558270" watchObservedRunningTime="2025-12-12 00:39:50.554588308 +0000 UTC m=+2025.332389121" Dec 12 00:39:56 crc kubenswrapper[4917]: I1212 00:39:56.974600 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 12 00:39:56 crc kubenswrapper[4917]: I1212 00:39:56.976762 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 12 00:39:56 crc kubenswrapper[4917]: I1212 00:39:56.979308 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Dec 12 00:39:56 crc kubenswrapper[4917]: I1212 00:39:56.979332 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Dec 12 00:39:56 crc kubenswrapper[4917]: I1212 00:39:56.979417 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Dec 12 00:39:56 crc kubenswrapper[4917]: I1212 00:39:56.979455 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Dec 12 00:39:56 crc kubenswrapper[4917]: I1212 00:39:56.979582 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Dec 12 00:39:56 crc kubenswrapper[4917]: I1212 00:39:56.980296 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Dec 12 00:39:56 crc kubenswrapper[4917]: I1212 00:39:56.980668 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Dec 12 00:39:56 crc kubenswrapper[4917]: I1212 00:39:56.980984 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-cg4gb" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.008876 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.043435 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlvfz\" (UniqueName: \"kubernetes.io/projected/460a63ae-ba23-445f-afd3-c9dde4d8a411-kube-api-access-nlvfz\") pod \"prometheus-default-0\" (UID: 
\"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.043497 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/460a63ae-ba23-445f-afd3-c9dde4d8a411-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.043545 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/460a63ae-ba23-445f-afd3-c9dde4d8a411-config-out\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.043573 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-web-config\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.043665 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.043716 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e4d63990-6cc7-45c3-90ff-add2e7149a75\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e4d63990-6cc7-45c3-90ff-add2e7149a75\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.043750 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-config\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.043791 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.043809 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/460a63ae-ba23-445f-afd3-c9dde4d8a411-tls-assets\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.043830 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/460a63ae-ba23-445f-afd3-c9dde4d8a411-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.145373 4917 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/460a63ae-ba23-445f-afd3-c9dde4d8a411-config-out\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.146463 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-web-config\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.146507 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.146545 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e4d63990-6cc7-45c3-90ff-add2e7149a75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e4d63990-6cc7-45c3-90ff-add2e7149a75\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.146583 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-config\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.146637 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.146677 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/460a63ae-ba23-445f-afd3-c9dde4d8a411-tls-assets\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.146712 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/460a63ae-ba23-445f-afd3-c9dde4d8a411-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.146733 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlvfz\" (UniqueName: \"kubernetes.io/projected/460a63ae-ba23-445f-afd3-c9dde4d8a411-kube-api-access-nlvfz\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.146759 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/460a63ae-ba23-445f-afd3-c9dde4d8a411-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: E1212 00:39:57.147312 4917 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret 
"default-prometheus-proxy-tls" not found Dec 12 00:39:57 crc kubenswrapper[4917]: E1212 00:39:57.147432 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-prometheus-proxy-tls podName:460a63ae-ba23-445f-afd3-c9dde4d8a411 nodeName:}" failed. No retries permitted until 2025-12-12 00:39:57.647393982 +0000 UTC m=+2032.425194825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "460a63ae-ba23-445f-afd3-c9dde4d8a411") : secret "default-prometheus-proxy-tls" not found Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.147575 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/460a63ae-ba23-445f-afd3-c9dde4d8a411-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.150551 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/460a63ae-ba23-445f-afd3-c9dde4d8a411-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.151958 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-web-config\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.152392 
4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/460a63ae-ba23-445f-afd3-c9dde4d8a411-config-out\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.154094 4917 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.154148 4917 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e4d63990-6cc7-45c3-90ff-add2e7149a75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e4d63990-6cc7-45c3-90ff-add2e7149a75\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46c61266ed32c9b54d7dbf6d5cd0bd21d90fa24e338b407a2957c9658ba92cc6/globalmount\"" pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.162938 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-config\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.164446 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.165040 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/460a63ae-ba23-445f-afd3-c9dde4d8a411-tls-assets\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.168750 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlvfz\" (UniqueName: \"kubernetes.io/projected/460a63ae-ba23-445f-afd3-c9dde4d8a411-kube-api-access-nlvfz\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.185914 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e4d63990-6cc7-45c3-90ff-add2e7149a75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e4d63990-6cc7-45c3-90ff-add2e7149a75\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: I1212 00:39:57.654720 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:57 crc kubenswrapper[4917]: E1212 00:39:57.655266 4917 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 12 00:39:57 crc kubenswrapper[4917]: E1212 00:39:57.655324 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-prometheus-proxy-tls podName:460a63ae-ba23-445f-afd3-c9dde4d8a411 nodeName:}" failed. 
No retries permitted until 2025-12-12 00:39:58.655309808 +0000 UTC m=+2033.433110621 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "460a63ae-ba23-445f-afd3-c9dde4d8a411") : secret "default-prometheus-proxy-tls" not found Dec 12 00:39:58 crc kubenswrapper[4917]: I1212 00:39:58.670275 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:58 crc kubenswrapper[4917]: I1212 00:39:58.674504 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/460a63ae-ba23-445f-afd3-c9dde4d8a411-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"460a63ae-ba23-445f-afd3-c9dde4d8a411\") " pod="service-telemetry/prometheus-default-0" Dec 12 00:39:58 crc kubenswrapper[4917]: I1212 00:39:58.794703 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 12 00:39:59 crc kubenswrapper[4917]: I1212 00:39:59.052222 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 12 00:39:59 crc kubenswrapper[4917]: W1212 00:39:59.063037 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod460a63ae_ba23_445f_afd3_c9dde4d8a411.slice/crio-7586aee439a54e1b89badffef79c6f8b14409a24d58a373905631cc0d1c8ccd8 WatchSource:0}: Error finding container 7586aee439a54e1b89badffef79c6f8b14409a24d58a373905631cc0d1c8ccd8: Status 404 returned error can't find the container with id 7586aee439a54e1b89badffef79c6f8b14409a24d58a373905631cc0d1c8ccd8 Dec 12 00:39:59 crc kubenswrapper[4917]: I1212 00:39:59.613735 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"460a63ae-ba23-445f-afd3-c9dde4d8a411","Type":"ContainerStarted","Data":"7586aee439a54e1b89badffef79c6f8b14409a24d58a373905631cc0d1c8ccd8"} Dec 12 00:40:04 crc kubenswrapper[4917]: I1212 00:40:04.669823 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"460a63ae-ba23-445f-afd3-c9dde4d8a411","Type":"ContainerStarted","Data":"d015a32181cd0b729bed148918fa182ee9bf3d4f453d13679fae56ceae45ee03"} Dec 12 00:40:07 crc kubenswrapper[4917]: I1212 00:40:07.898785 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-pschj"] Dec 12 00:40:07 crc kubenswrapper[4917]: I1212 00:40:07.912140 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-pschj" Dec 12 00:40:07 crc kubenswrapper[4917]: I1212 00:40:07.911427 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-pschj"] Dec 12 00:40:08 crc kubenswrapper[4917]: I1212 00:40:08.015848 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s942g\" (UniqueName: \"kubernetes.io/projected/58a31d61-50bc-4a00-9040-7ece16fa7c9d-kube-api-access-s942g\") pod \"default-snmp-webhook-6856cfb745-pschj\" (UID: \"58a31d61-50bc-4a00-9040-7ece16fa7c9d\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-pschj" Dec 12 00:40:08 crc kubenswrapper[4917]: I1212 00:40:08.117608 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s942g\" (UniqueName: \"kubernetes.io/projected/58a31d61-50bc-4a00-9040-7ece16fa7c9d-kube-api-access-s942g\") pod \"default-snmp-webhook-6856cfb745-pschj\" (UID: \"58a31d61-50bc-4a00-9040-7ece16fa7c9d\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-pschj" Dec 12 00:40:08 crc kubenswrapper[4917]: I1212 00:40:08.134989 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s942g\" (UniqueName: \"kubernetes.io/projected/58a31d61-50bc-4a00-9040-7ece16fa7c9d-kube-api-access-s942g\") pod \"default-snmp-webhook-6856cfb745-pschj\" (UID: \"58a31d61-50bc-4a00-9040-7ece16fa7c9d\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-pschj" Dec 12 00:40:08 crc kubenswrapper[4917]: I1212 00:40:08.244843 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-pschj" Dec 12 00:40:08 crc kubenswrapper[4917]: I1212 00:40:08.692494 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-pschj"] Dec 12 00:40:08 crc kubenswrapper[4917]: I1212 00:40:08.714443 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-pschj" event={"ID":"58a31d61-50bc-4a00-9040-7ece16fa7c9d","Type":"ContainerStarted","Data":"10b23af40f8b0edd6e8c9effd549e68f5acf008304f37260500dc0ab7ce8b646"} Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.319856 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.321924 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.324808 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.324873 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-c74x6" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.325192 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.325234 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.325205 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.326738 4917 reflector.go:368] Caches populated for *v1.Secret from 
object-"service-telemetry"/"alertmanager-default-tls-assets-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.340274 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.392347 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.392445 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twhth\" (UniqueName: \"kubernetes.io/projected/337f1b5b-cd54-4c3e-98ef-2c29e8019998-kube-api-access-twhth\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.392500 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-config-volume\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.392538 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.392569 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-web-config\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.392590 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/337f1b5b-cd54-4c3e-98ef-2c29e8019998-config-out\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.392622 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.392674 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-91fd9b64-b445-4ba1-95ec-7a675f4d49c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91fd9b64-b445-4ba1-95ec-7a675f4d49c1\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.392712 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/337f1b5b-cd54-4c3e-98ef-2c29e8019998-tls-assets\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 
crc kubenswrapper[4917]: I1212 00:40:11.494523 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.494883 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-91fd9b64-b445-4ba1-95ec-7a675f4d49c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91fd9b64-b445-4ba1-95ec-7a675f4d49c1\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.494944 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/337f1b5b-cd54-4c3e-98ef-2c29e8019998-tls-assets\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.494982 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: E1212 00:40:11.494776 4917 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 12 00:40:11 crc kubenswrapper[4917]: E1212 00:40:11.495309 4917 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls podName:337f1b5b-cd54-4c3e-98ef-2c29e8019998 nodeName:}" failed. No retries permitted until 2025-12-12 00:40:11.995274721 +0000 UTC m=+2046.773075534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "337f1b5b-cd54-4c3e-98ef-2c29e8019998") : secret "default-alertmanager-proxy-tls" not found Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.498087 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twhth\" (UniqueName: \"kubernetes.io/projected/337f1b5b-cd54-4c3e-98ef-2c29e8019998-kube-api-access-twhth\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.498199 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-config-volume\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.498270 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.498314 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-web-config\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.498359 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/337f1b5b-cd54-4c3e-98ef-2c29e8019998-config-out\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.500887 4917 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.500940 4917 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-91fd9b64-b445-4ba1-95ec-7a675f4d49c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91fd9b64-b445-4ba1-95ec-7a675f4d49c1\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f133e1bae192ac8e4322e4b60fb68d9643421e01415b1fe6ed3bf7224d7bb991/globalmount\"" pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.508655 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/337f1b5b-cd54-4c3e-98ef-2c29e8019998-tls-assets\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.508949 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: 
\"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.513527 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.514122 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-web-config\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.514422 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/337f1b5b-cd54-4c3e-98ef-2c29e8019998-config-out\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.514689 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twhth\" (UniqueName: \"kubernetes.io/projected/337f1b5b-cd54-4c3e-98ef-2c29e8019998-kube-api-access-twhth\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.521355 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-config-volume\") pod 
\"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:11 crc kubenswrapper[4917]: I1212 00:40:11.550011 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-91fd9b64-b445-4ba1-95ec-7a675f4d49c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91fd9b64-b445-4ba1-95ec-7a675f4d49c1\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:12 crc kubenswrapper[4917]: I1212 00:40:12.005947 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:12 crc kubenswrapper[4917]: E1212 00:40:12.006131 4917 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 12 00:40:12 crc kubenswrapper[4917]: E1212 00:40:12.006212 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls podName:337f1b5b-cd54-4c3e-98ef-2c29e8019998 nodeName:}" failed. No retries permitted until 2025-12-12 00:40:13.006193146 +0000 UTC m=+2047.783993959 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "337f1b5b-cd54-4c3e-98ef-2c29e8019998") : secret "default-alertmanager-proxy-tls" not found Dec 12 00:40:12 crc kubenswrapper[4917]: I1212 00:40:12.744449 4917 generic.go:334] "Generic (PLEG): container finished" podID="460a63ae-ba23-445f-afd3-c9dde4d8a411" containerID="d015a32181cd0b729bed148918fa182ee9bf3d4f453d13679fae56ceae45ee03" exitCode=0 Dec 12 00:40:12 crc kubenswrapper[4917]: I1212 00:40:12.744756 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"460a63ae-ba23-445f-afd3-c9dde4d8a411","Type":"ContainerDied","Data":"d015a32181cd0b729bed148918fa182ee9bf3d4f453d13679fae56ceae45ee03"} Dec 12 00:40:13 crc kubenswrapper[4917]: I1212 00:40:13.024298 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:13 crc kubenswrapper[4917]: E1212 00:40:13.024572 4917 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 12 00:40:13 crc kubenswrapper[4917]: E1212 00:40:13.024766 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls podName:337f1b5b-cd54-4c3e-98ef-2c29e8019998 nodeName:}" failed. No retries permitted until 2025-12-12 00:40:15.02471952 +0000 UTC m=+2049.802520333 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "337f1b5b-cd54-4c3e-98ef-2c29e8019998") : secret "default-alertmanager-proxy-tls" not found Dec 12 00:40:15 crc kubenswrapper[4917]: I1212 00:40:15.055515 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:15 crc kubenswrapper[4917]: E1212 00:40:15.055751 4917 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 12 00:40:15 crc kubenswrapper[4917]: E1212 00:40:15.056040 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls podName:337f1b5b-cd54-4c3e-98ef-2c29e8019998 nodeName:}" failed. No retries permitted until 2025-12-12 00:40:19.056021935 +0000 UTC m=+2053.833822748 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "337f1b5b-cd54-4c3e-98ef-2c29e8019998") : secret "default-alertmanager-proxy-tls" not found Dec 12 00:40:15 crc kubenswrapper[4917]: I1212 00:40:15.773985 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-pschj" event={"ID":"58a31d61-50bc-4a00-9040-7ece16fa7c9d","Type":"ContainerStarted","Data":"1ccd49a222c61338ea270bef425a794f050dc2274a9234d03e52c480ed17c792"} Dec 12 00:40:15 crc kubenswrapper[4917]: I1212 00:40:15.811959 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-pschj" podStartSLOduration=2.360451222 podStartE2EDuration="8.811935983s" podCreationTimestamp="2025-12-12 00:40:07 +0000 UTC" firstStartedPulling="2025-12-12 00:40:08.708433199 +0000 UTC m=+2043.486234012" lastFinishedPulling="2025-12-12 00:40:15.15991796 +0000 UTC m=+2049.937718773" observedRunningTime="2025-12-12 00:40:15.78988164 +0000 UTC m=+2050.567682453" watchObservedRunningTime="2025-12-12 00:40:15.811935983 +0000 UTC m=+2050.589736796" Dec 12 00:40:19 crc kubenswrapper[4917]: I1212 00:40:19.131151 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:19 crc kubenswrapper[4917]: I1212 00:40:19.139269 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/337f1b5b-cd54-4c3e-98ef-2c29e8019998-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"337f1b5b-cd54-4c3e-98ef-2c29e8019998\") " pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:19 crc kubenswrapper[4917]: I1212 00:40:19.157021 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 12 00:40:20 crc kubenswrapper[4917]: I1212 00:40:20.930697 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 12 00:40:21 crc kubenswrapper[4917]: I1212 00:40:21.823723 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"460a63ae-ba23-445f-afd3-c9dde4d8a411","Type":"ContainerStarted","Data":"0223c1acea20b3e32a46862bc553e9a24cbb45a1b048ef7229ff7a2bb529d54b"} Dec 12 00:40:21 crc kubenswrapper[4917]: I1212 00:40:21.825081 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"337f1b5b-cd54-4c3e-98ef-2c29e8019998","Type":"ContainerStarted","Data":"c7537df26ecdcd095b91fdf04d7a6d41afa5be208edc6c02fe01ca84acb1092e"} Dec 12 00:40:23 crc kubenswrapper[4917]: I1212 00:40:23.842905 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"337f1b5b-cd54-4c3e-98ef-2c29e8019998","Type":"ContainerStarted","Data":"3a44e5636c4c306942affeaece444bdaffdc7f1b627971aa69dc8aa5cf7d7fc3"} Dec 12 00:40:23 crc kubenswrapper[4917]: I1212 00:40:23.851201 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"460a63ae-ba23-445f-afd3-c9dde4d8a411","Type":"ContainerStarted","Data":"decae63c2ca7421ed81f4c7f622827403f6b856f1e2514fcdeab4fda80a59c72"} Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.031712 4917 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74"] Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.033613 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.036055 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.036320 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-g8j4c" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.036560 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.039189 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.053562 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74"] Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.092364 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhh4\" (UniqueName: \"kubernetes.io/projected/1a020119-1f66-4f57-be67-c2a2b91afda1-kube-api-access-2mhh4\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.092426 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.092458 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1a020119-1f66-4f57-be67-c2a2b91afda1-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.092475 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a020119-1f66-4f57-be67-c2a2b91afda1-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.092626 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.194368 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mhh4\" (UniqueName: \"kubernetes.io/projected/1a020119-1f66-4f57-be67-c2a2b91afda1-kube-api-access-2mhh4\") pod 
\"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.194432 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.194474 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1a020119-1f66-4f57-be67-c2a2b91afda1-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.194500 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a020119-1f66-4f57-be67-c2a2b91afda1-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.194531 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: E1212 00:40:28.194693 4917 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 00:40:28 crc kubenswrapper[4917]: E1212 00:40:28.194771 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-default-cloud1-coll-meter-proxy-tls podName:1a020119-1f66-4f57-be67-c2a2b91afda1 nodeName:}" failed. No retries permitted until 2025-12-12 00:40:28.694747826 +0000 UTC m=+2063.472548639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" (UID: "1a020119-1f66-4f57-be67-c2a2b91afda1") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.195095 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a020119-1f66-4f57-be67-c2a2b91afda1-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.195535 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1a020119-1f66-4f57-be67-c2a2b91afda1-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.206752 4917 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.213953 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mhh4\" (UniqueName: \"kubernetes.io/projected/1a020119-1f66-4f57-be67-c2a2b91afda1-kube-api-access-2mhh4\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: I1212 00:40:28.702831 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:28 crc kubenswrapper[4917]: E1212 00:40:28.704188 4917 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 00:40:28 crc kubenswrapper[4917]: E1212 00:40:28.704253 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-default-cloud1-coll-meter-proxy-tls podName:1a020119-1f66-4f57-be67-c2a2b91afda1 nodeName:}" failed. No retries permitted until 2025-12-12 00:40:29.704234844 +0000 UTC m=+2064.482035657 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" (UID: "1a020119-1f66-4f57-be67-c2a2b91afda1") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 12 00:40:29 crc kubenswrapper[4917]: I1212 00:40:29.719953 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:29 crc kubenswrapper[4917]: I1212 00:40:29.729930 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a020119-1f66-4f57-be67-c2a2b91afda1-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74\" (UID: \"1a020119-1f66-4f57-be67-c2a2b91afda1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:29 crc kubenswrapper[4917]: I1212 00:40:29.872521 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.465820 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v"] Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.467897 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.471042 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.471275 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.490913 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v"] Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.509381 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74"] Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.532934 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/339269b5-9c82-4a6c-83c8-02f76531493c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.533002 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.533038 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/339269b5-9c82-4a6c-83c8-02f76531493c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.533124 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.533156 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssj49\" (UniqueName: \"kubernetes.io/projected/339269b5-9c82-4a6c-83c8-02f76531493c-kube-api-access-ssj49\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.634571 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssj49\" (UniqueName: \"kubernetes.io/projected/339269b5-9c82-4a6c-83c8-02f76531493c-kube-api-access-ssj49\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.635026 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: 
\"kubernetes.io/configmap/339269b5-9c82-4a6c-83c8-02f76531493c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.635059 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.635091 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/339269b5-9c82-4a6c-83c8-02f76531493c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.635160 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: E1212 00:40:30.635344 4917 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 00:40:30 crc kubenswrapper[4917]: E1212 00:40:30.635437 4917 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls podName:339269b5-9c82-4a6c-83c8-02f76531493c nodeName:}" failed. No retries permitted until 2025-12-12 00:40:31.135411334 +0000 UTC m=+2065.913212157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" (UID: "339269b5-9c82-4a6c-83c8-02f76531493c") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.636216 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/339269b5-9c82-4a6c-83c8-02f76531493c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.636809 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/339269b5-9c82-4a6c-83c8-02f76531493c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.644611 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc 
kubenswrapper[4917]: I1212 00:40:30.652141 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssj49\" (UniqueName: \"kubernetes.io/projected/339269b5-9c82-4a6c-83c8-02f76531493c-kube-api-access-ssj49\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.913373 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" event={"ID":"1a020119-1f66-4f57-be67-c2a2b91afda1","Type":"ContainerStarted","Data":"01c15093d47a43872732f50961cf32d3ee3d41777d2549ef07bfdf6a127dfdd4"} Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.919054 4917 generic.go:334] "Generic (PLEG): container finished" podID="337f1b5b-cd54-4c3e-98ef-2c29e8019998" containerID="3a44e5636c4c306942affeaece444bdaffdc7f1b627971aa69dc8aa5cf7d7fc3" exitCode=0 Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.919116 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"337f1b5b-cd54-4c3e-98ef-2c29e8019998","Type":"ContainerDied","Data":"3a44e5636c4c306942affeaece444bdaffdc7f1b627971aa69dc8aa5cf7d7fc3"} Dec 12 00:40:30 crc kubenswrapper[4917]: I1212 00:40:30.922489 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"460a63ae-ba23-445f-afd3-c9dde4d8a411","Type":"ContainerStarted","Data":"5d956dd1ac087cd9487918adbb1840d0c5fcb3ea2e3217c13e0e44385e1c3030"} Dec 12 00:40:31 crc kubenswrapper[4917]: I1212 00:40:31.143298 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls\") pod 
\"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:31 crc kubenswrapper[4917]: E1212 00:40:31.143537 4917 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 00:40:31 crc kubenswrapper[4917]: E1212 00:40:31.143634 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls podName:339269b5-9c82-4a6c-83c8-02f76531493c nodeName:}" failed. No retries permitted until 2025-12-12 00:40:32.143611598 +0000 UTC m=+2066.921412411 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" (UID: "339269b5-9c82-4a6c-83c8-02f76531493c") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 00:40:31 crc kubenswrapper[4917]: I1212 00:40:31.932042 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" event={"ID":"1a020119-1f66-4f57-be67-c2a2b91afda1","Type":"ContainerStarted","Data":"828f135fa098da18e897b8c107d56b5644e42527371f86c57bced638600bda5d"} Dec 12 00:40:31 crc kubenswrapper[4917]: I1212 00:40:31.959952 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.674416866 podStartE2EDuration="36.959931031s" podCreationTimestamp="2025-12-12 00:39:55 +0000 UTC" firstStartedPulling="2025-12-12 00:39:59.066394621 +0000 UTC m=+2033.844195434" lastFinishedPulling="2025-12-12 00:40:30.351908776 +0000 UTC m=+2065.129709599" observedRunningTime="2025-12-12 
00:40:31.956593573 +0000 UTC m=+2066.734394396" watchObservedRunningTime="2025-12-12 00:40:31.959931031 +0000 UTC m=+2066.737731844" Dec 12 00:40:32 crc kubenswrapper[4917]: I1212 00:40:32.174947 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:32 crc kubenswrapper[4917]: E1212 00:40:32.175235 4917 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 00:40:32 crc kubenswrapper[4917]: E1212 00:40:32.175297 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls podName:339269b5-9c82-4a6c-83c8-02f76531493c nodeName:}" failed. No retries permitted until 2025-12-12 00:40:34.175278489 +0000 UTC m=+2068.953079302 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" (UID: "339269b5-9c82-4a6c-83c8-02f76531493c") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 00:40:33 crc kubenswrapper[4917]: I1212 00:40:33.796893 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Dec 12 00:40:34 crc kubenswrapper[4917]: I1212 00:40:34.207333 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:34 crc kubenswrapper[4917]: I1212 00:40:34.214058 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/339269b5-9c82-4a6c-83c8-02f76531493c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v\" (UID: \"339269b5-9c82-4a6c-83c8-02f76531493c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:34 crc kubenswrapper[4917]: I1212 00:40:34.422268 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.017453 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7"] Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.019175 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.021849 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.022708 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.035476 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7"] Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.123192 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.123487 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kgn7\" (UniqueName: \"kubernetes.io/projected/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-kube-api-access-9kgn7\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.123530 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.123548 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.123575 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.224567 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kgn7\" (UniqueName: \"kubernetes.io/projected/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-kube-api-access-9kgn7\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc 
kubenswrapper[4917]: E1212 00:40:35.225003 4917 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.224638 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: E1212 00:40:35.227185 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-default-cloud1-sens-meter-proxy-tls podName:fe8e080e-f50f-4d62-b8bf-db02d45c9dd9 nodeName:}" failed. No retries permitted until 2025-12-12 00:40:35.725043926 +0000 UTC m=+2070.502844739 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" (UID: "fe8e080e-f50f-4d62-b8bf-db02d45c9dd9") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.227218 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.227322 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.227442 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.228420 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-sg-core-config\") pod 
\"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.229250 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.243566 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kgn7\" (UniqueName: \"kubernetes.io/projected/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-kube-api-access-9kgn7\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.246479 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: I1212 00:40:35.736194 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:35 crc kubenswrapper[4917]: E1212 00:40:35.738169 4917 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 00:40:35 crc kubenswrapper[4917]: E1212 00:40:35.738351 4917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-default-cloud1-sens-meter-proxy-tls podName:fe8e080e-f50f-4d62-b8bf-db02d45c9dd9 nodeName:}" failed. No retries permitted until 2025-12-12 00:40:36.738325845 +0000 UTC m=+2071.516126648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" (UID: "fe8e080e-f50f-4d62-b8bf-db02d45c9dd9") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 00:40:36 crc kubenswrapper[4917]: I1212 00:40:36.755614 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:36 crc kubenswrapper[4917]: I1212 00:40:36.760803 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe8e080e-f50f-4d62-b8bf-db02d45c9dd9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7\" (UID: \"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:36 crc kubenswrapper[4917]: I1212 00:40:36.836627 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" Dec 12 00:40:39 crc kubenswrapper[4917]: I1212 00:40:39.874985 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v"] Dec 12 00:40:39 crc kubenswrapper[4917]: W1212 00:40:39.890837 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod339269b5_9c82_4a6c_83c8_02f76531493c.slice/crio-b121e39042a39bf3c6248c6aca7b6dc95b496165ac145988fed2da09b3814443 WatchSource:0}: Error finding container b121e39042a39bf3c6248c6aca7b6dc95b496165ac145988fed2da09b3814443: Status 404 returned error can't find the container with id b121e39042a39bf3c6248c6aca7b6dc95b496165ac145988fed2da09b3814443 Dec 12 00:40:40 crc kubenswrapper[4917]: I1212 00:40:40.003340 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"337f1b5b-cd54-4c3e-98ef-2c29e8019998","Type":"ContainerStarted","Data":"899a97c1e9376473db0dd1944d1bbb70f5c6cc786a7d69829b9441460ccdf688"} Dec 12 00:40:40 crc kubenswrapper[4917]: I1212 00:40:40.017196 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" event={"ID":"339269b5-9c82-4a6c-83c8-02f76531493c","Type":"ContainerStarted","Data":"b121e39042a39bf3c6248c6aca7b6dc95b496165ac145988fed2da09b3814443"} Dec 12 00:40:40 crc kubenswrapper[4917]: I1212 00:40:40.026630 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" 
event={"ID":"1a020119-1f66-4f57-be67-c2a2b91afda1","Type":"ContainerStarted","Data":"fadd718aa6b190606e08a0542ecdfccf4645c63db6429173740aefe173e9c66f"} Dec 12 00:40:40 crc kubenswrapper[4917]: I1212 00:40:40.035136 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7"] Dec 12 00:40:41 crc kubenswrapper[4917]: I1212 00:40:41.043737 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" event={"ID":"339269b5-9c82-4a6c-83c8-02f76531493c","Type":"ContainerStarted","Data":"5fa68d26a3c17cf08e9abe9e683570a4acc4e0968d267ddb33b80bf79a5fad11"} Dec 12 00:40:41 crc kubenswrapper[4917]: I1212 00:40:41.044173 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" event={"ID":"339269b5-9c82-4a6c-83c8-02f76531493c","Type":"ContainerStarted","Data":"b48f21dbf5bb7bc8129c547a54246debd6af28b37cb459a0cf95d1085a2575b1"} Dec 12 00:40:41 crc kubenswrapper[4917]: I1212 00:40:41.049449 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" event={"ID":"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9","Type":"ContainerStarted","Data":"6e18ce54ade7a345f7221f60a9e153cc660b1601cd3e90fa890a6af9443a817f"} Dec 12 00:40:42 crc kubenswrapper[4917]: I1212 00:40:42.064006 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" event={"ID":"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9","Type":"ContainerStarted","Data":"0369a9d180a356edc438ea30c697b586072c9432f883952d0be944d274819ee5"} Dec 12 00:40:42 crc kubenswrapper[4917]: I1212 00:40:42.064395 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" 
event={"ID":"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9","Type":"ContainerStarted","Data":"0b30d81ea523fea975af4ce29c0d9e0cc8960ee09e24ec5143c9592537cee90c"} Dec 12 00:40:43 crc kubenswrapper[4917]: I1212 00:40:43.076744 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"337f1b5b-cd54-4c3e-98ef-2c29e8019998","Type":"ContainerStarted","Data":"4b49263bfc2ddfb87104c2987042cd6da51a2e1f2d435d94b2077b9de9b6b398"} Dec 12 00:40:43 crc kubenswrapper[4917]: I1212 00:40:43.796896 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 12 00:40:43 crc kubenswrapper[4917]: I1212 00:40:43.844465 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.136877 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc"] Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.138183 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.141822 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.142107 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.155140 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.158163 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc"] Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.222808 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1455d428-63a6-4c87-8a1d-958b4b5c1870-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.222894 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1455d428-63a6-4c87-8a1d-958b4b5c1870-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.222971 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bbk\" (UniqueName: 
\"kubernetes.io/projected/1455d428-63a6-4c87-8a1d-958b4b5c1870-kube-api-access-x7bbk\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.224304 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/1455d428-63a6-4c87-8a1d-958b4b5c1870-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.325684 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/1455d428-63a6-4c87-8a1d-958b4b5c1870-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.325758 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1455d428-63a6-4c87-8a1d-958b4b5c1870-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.325804 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1455d428-63a6-4c87-8a1d-958b4b5c1870-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: 
\"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.325866 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7bbk\" (UniqueName: \"kubernetes.io/projected/1455d428-63a6-4c87-8a1d-958b4b5c1870-kube-api-access-x7bbk\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.328261 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1455d428-63a6-4c87-8a1d-958b4b5c1870-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.328337 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1455d428-63a6-4c87-8a1d-958b4b5c1870-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.333749 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/1455d428-63a6-4c87-8a1d-958b4b5c1870-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.345977 4917 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7bbk\" (UniqueName: \"kubernetes.io/projected/1455d428-63a6-4c87-8a1d-958b4b5c1870-kube-api-access-x7bbk\") pod \"default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc\" (UID: \"1455d428-63a6-4c87-8a1d-958b4b5c1870\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:44 crc kubenswrapper[4917]: I1212 00:40:44.459176 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.586501 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7"] Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.588210 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.593581 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.600762 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7"] Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.676770 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c2fa68d-9bd5-4d49-8733-79eada3821f0-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.677040 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7c2fa68d-9bd5-4d49-8733-79eada3821f0-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.677264 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7c2fa68d-9bd5-4d49-8733-79eada3821f0-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.677374 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg78s\" (UniqueName: \"kubernetes.io/projected/7c2fa68d-9bd5-4d49-8733-79eada3821f0-kube-api-access-vg78s\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.778620 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c2fa68d-9bd5-4d49-8733-79eada3821f0-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.778741 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: 
\"kubernetes.io/configmap/7c2fa68d-9bd5-4d49-8733-79eada3821f0-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.778825 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7c2fa68d-9bd5-4d49-8733-79eada3821f0-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.778881 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg78s\" (UniqueName: \"kubernetes.io/projected/7c2fa68d-9bd5-4d49-8733-79eada3821f0-kube-api-access-vg78s\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.779204 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c2fa68d-9bd5-4d49-8733-79eada3821f0-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.779847 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7c2fa68d-9bd5-4d49-8733-79eada3821f0-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.788714 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7c2fa68d-9bd5-4d49-8733-79eada3821f0-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.802347 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg78s\" (UniqueName: \"kubernetes.io/projected/7c2fa68d-9bd5-4d49-8733-79eada3821f0-kube-api-access-vg78s\") pod \"default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7\" (UID: \"7c2fa68d-9bd5-4d49-8733-79eada3821f0\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:47 crc kubenswrapper[4917]: I1212 00:40:47.955326 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" Dec 12 00:40:54 crc kubenswrapper[4917]: I1212 00:40:54.355406 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7"] Dec 12 00:40:54 crc kubenswrapper[4917]: I1212 00:40:54.627422 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc"] Dec 12 00:40:54 crc kubenswrapper[4917]: W1212 00:40:54.637618 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1455d428_63a6_4c87_8a1d_958b4b5c1870.slice/crio-38256174b727418ecb9f4317344966c59996e1d6d75429eb196c3f8e6f8a80b2 WatchSource:0}: Error finding container 38256174b727418ecb9f4317344966c59996e1d6d75429eb196c3f8e6f8a80b2: Status 404 returned error can't find the container with id 38256174b727418ecb9f4317344966c59996e1d6d75429eb196c3f8e6f8a80b2 Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.174960 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" event={"ID":"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9","Type":"ContainerStarted","Data":"23532e4988af8daf35d3030a66767da34fd68fdb7b06550303f33d2a76ac2f0e"} Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.178403 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" event={"ID":"1455d428-63a6-4c87-8a1d-958b4b5c1870","Type":"ContainerStarted","Data":"db762187dc91bb83accf2ccdae388658b721222ba75df3438efb906c4e599f6c"} Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.178448 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" 
event={"ID":"1455d428-63a6-4c87-8a1d-958b4b5c1870","Type":"ContainerStarted","Data":"f0c657c86f84716e4dd1f32bbf9a5464bb9b35b344de17c5e0465356ef5547ba"} Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.178460 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" event={"ID":"1455d428-63a6-4c87-8a1d-958b4b5c1870","Type":"ContainerStarted","Data":"38256174b727418ecb9f4317344966c59996e1d6d75429eb196c3f8e6f8a80b2"} Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.180913 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" event={"ID":"7c2fa68d-9bd5-4d49-8733-79eada3821f0","Type":"ContainerStarted","Data":"c1b106f81d0fa56bed5f6dfe8183f9d3c23a9099819c6d85b3d9b022f9a7b536"} Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.180993 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" event={"ID":"7c2fa68d-9bd5-4d49-8733-79eada3821f0","Type":"ContainerStarted","Data":"2ff58277c2cbe0e6e41e6281f536967a32b10345b5e4d4863d310c7d729a8f24"} Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.181007 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" event={"ID":"7c2fa68d-9bd5-4d49-8733-79eada3821f0","Type":"ContainerStarted","Data":"29b06392dc6fb33eeb526b5611e89ddac8c05e3f582361e78f6a78fa98a46ea3"} Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.191445 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" event={"ID":"1a020119-1f66-4f57-be67-c2a2b91afda1","Type":"ContainerStarted","Data":"66e3b5f44f13f28428a6a02f1cf75866ef9586195a12eff01ce37141569d3352"} Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.196410 4917 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"337f1b5b-cd54-4c3e-98ef-2c29e8019998","Type":"ContainerStarted","Data":"6508efe839e7ff6741d2dc3ab689007b1256dea7fe5d061c682d2018f5f1e63d"} Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.198350 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" podStartSLOduration=6.953624792 podStartE2EDuration="21.198327476s" podCreationTimestamp="2025-12-12 00:40:34 +0000 UTC" firstStartedPulling="2025-12-12 00:40:40.055762966 +0000 UTC m=+2074.833563779" lastFinishedPulling="2025-12-12 00:40:54.30046565 +0000 UTC m=+2089.078266463" observedRunningTime="2025-12-12 00:40:55.194479774 +0000 UTC m=+2089.972280597" watchObservedRunningTime="2025-12-12 00:40:55.198327476 +0000 UTC m=+2089.976128289" Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.203445 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" event={"ID":"339269b5-9c82-4a6c-83c8-02f76531493c","Type":"ContainerStarted","Data":"056a9162e1a9a0933d52b5b88a9fc8a8bab7f84e9d9f68d89efd699ac996a215"} Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.217219 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" podStartSLOduration=10.891371718 podStartE2EDuration="11.217192525s" podCreationTimestamp="2025-12-12 00:40:44 +0000 UTC" firstStartedPulling="2025-12-12 00:40:54.642265568 +0000 UTC m=+2089.420066381" lastFinishedPulling="2025-12-12 00:40:54.968086365 +0000 UTC m=+2089.745887188" observedRunningTime="2025-12-12 00:40:55.213427345 +0000 UTC m=+2089.991228178" watchObservedRunningTime="2025-12-12 00:40:55.217192525 +0000 UTC m=+2089.994993338" Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.242306 4917 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" podStartSLOduration=3.481271347 podStartE2EDuration="27.242279418s" podCreationTimestamp="2025-12-12 00:40:28 +0000 UTC" firstStartedPulling="2025-12-12 00:40:30.501324382 +0000 UTC m=+2065.279125195" lastFinishedPulling="2025-12-12 00:40:54.262332453 +0000 UTC m=+2089.040133266" observedRunningTime="2025-12-12 00:40:55.237267715 +0000 UTC m=+2090.015068548" watchObservedRunningTime="2025-12-12 00:40:55.242279418 +0000 UTC m=+2090.020080251" Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.265946 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" podStartSLOduration=7.949413061 podStartE2EDuration="8.265923981s" podCreationTimestamp="2025-12-12 00:40:47 +0000 UTC" firstStartedPulling="2025-12-12 00:40:54.378402009 +0000 UTC m=+2089.156202822" lastFinishedPulling="2025-12-12 00:40:54.694912929 +0000 UTC m=+2089.472713742" observedRunningTime="2025-12-12 00:40:55.260891699 +0000 UTC m=+2090.038692532" watchObservedRunningTime="2025-12-12 00:40:55.265923981 +0000 UTC m=+2090.043724814" Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.290514 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=21.994004971 podStartE2EDuration="45.290489341s" podCreationTimestamp="2025-12-12 00:40:10 +0000 UTC" firstStartedPulling="2025-12-12 00:40:30.922360944 +0000 UTC m=+2065.700161757" lastFinishedPulling="2025-12-12 00:40:54.218845324 +0000 UTC m=+2088.996646127" observedRunningTime="2025-12-12 00:40:55.281782051 +0000 UTC m=+2090.059582884" watchObservedRunningTime="2025-12-12 00:40:55.290489341 +0000 UTC m=+2090.068290174" Dec 12 00:40:55 crc kubenswrapper[4917]: I1212 00:40:55.311187 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" podStartSLOduration=10.941634166 podStartE2EDuration="25.311162077s" podCreationTimestamp="2025-12-12 00:40:30 +0000 UTC" firstStartedPulling="2025-12-12 00:40:39.894003064 +0000 UTC m=+2074.671803887" lastFinishedPulling="2025-12-12 00:40:54.263530985 +0000 UTC m=+2089.041331798" observedRunningTime="2025-12-12 00:40:55.30713101 +0000 UTC m=+2090.084931823" watchObservedRunningTime="2025-12-12 00:40:55.311162077 +0000 UTC m=+2090.088962890" Dec 12 00:40:59 crc kubenswrapper[4917]: I1212 00:40:59.638816 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:40:59 crc kubenswrapper[4917]: I1212 00:40:59.639468 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:41:02 crc kubenswrapper[4917]: I1212 00:41:02.652536 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t8tz6"] Dec 12 00:41:02 crc kubenswrapper[4917]: I1212 00:41:02.653141 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" podUID="ddf6e524-5c84-416a-a180-7e8ecffa312b" containerName="default-interconnect" containerID="cri-o://9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575" gracePeriod=30 Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.074532 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.237202 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-credentials\") pod \"ddf6e524-5c84-416a-a180-7e8ecffa312b\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.237290 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-credentials\") pod \"ddf6e524-5c84-416a-a180-7e8ecffa312b\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.237331 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-config\") pod \"ddf6e524-5c84-416a-a180-7e8ecffa312b\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.237484 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-ca\") pod \"ddf6e524-5c84-416a-a180-7e8ecffa312b\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.237530 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-users\") pod \"ddf6e524-5c84-416a-a180-7e8ecffa312b\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " Dec 12 
00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.237553 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-ca\") pod \"ddf6e524-5c84-416a-a180-7e8ecffa312b\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.237598 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdjzw\" (UniqueName: \"kubernetes.io/projected/ddf6e524-5c84-416a-a180-7e8ecffa312b-kube-api-access-hdjzw\") pod \"ddf6e524-5c84-416a-a180-7e8ecffa312b\" (UID: \"ddf6e524-5c84-416a-a180-7e8ecffa312b\") " Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.239142 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "ddf6e524-5c84-416a-a180-7e8ecffa312b" (UID: "ddf6e524-5c84-416a-a180-7e8ecffa312b"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.246008 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "ddf6e524-5c84-416a-a180-7e8ecffa312b" (UID: "ddf6e524-5c84-416a-a180-7e8ecffa312b"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.246041 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf6e524-5c84-416a-a180-7e8ecffa312b-kube-api-access-hdjzw" (OuterVolumeSpecName: "kube-api-access-hdjzw") pod "ddf6e524-5c84-416a-a180-7e8ecffa312b" (UID: "ddf6e524-5c84-416a-a180-7e8ecffa312b"). InnerVolumeSpecName "kube-api-access-hdjzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.246035 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "ddf6e524-5c84-416a-a180-7e8ecffa312b" (UID: "ddf6e524-5c84-416a-a180-7e8ecffa312b"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.246403 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "ddf6e524-5c84-416a-a180-7e8ecffa312b" (UID: "ddf6e524-5c84-416a-a180-7e8ecffa312b"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.246954 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "ddf6e524-5c84-416a-a180-7e8ecffa312b" (UID: "ddf6e524-5c84-416a-a180-7e8ecffa312b"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.249731 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "ddf6e524-5c84-416a-a180-7e8ecffa312b" (UID: "ddf6e524-5c84-416a-a180-7e8ecffa312b"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.265298 4917 generic.go:334] "Generic (PLEG): container finished" podID="ddf6e524-5c84-416a-a180-7e8ecffa312b" containerID="9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575" exitCode=0 Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.265379 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" event={"ID":"ddf6e524-5c84-416a-a180-7e8ecffa312b","Type":"ContainerDied","Data":"9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575"} Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.265424 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" event={"ID":"ddf6e524-5c84-416a-a180-7e8ecffa312b","Type":"ContainerDied","Data":"f6b16eecafc6a7f35069801f4e7d2e459701e0d985d2734154e91065e4959715"} Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.265541 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-t8tz6" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.265823 4917 scope.go:117] "RemoveContainer" containerID="9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.274856 4917 generic.go:334] "Generic (PLEG): container finished" podID="1a020119-1f66-4f57-be67-c2a2b91afda1" containerID="fadd718aa6b190606e08a0542ecdfccf4645c63db6429173740aefe173e9c66f" exitCode=0 Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.274951 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" event={"ID":"1a020119-1f66-4f57-be67-c2a2b91afda1","Type":"ContainerDied","Data":"fadd718aa6b190606e08a0542ecdfccf4645c63db6429173740aefe173e9c66f"} Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.276046 4917 scope.go:117] "RemoveContainer" containerID="fadd718aa6b190606e08a0542ecdfccf4645c63db6429173740aefe173e9c66f" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.298081 4917 generic.go:334] "Generic (PLEG): container finished" podID="339269b5-9c82-4a6c-83c8-02f76531493c" containerID="5fa68d26a3c17cf08e9abe9e683570a4acc4e0968d267ddb33b80bf79a5fad11" exitCode=0 Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.298206 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" event={"ID":"339269b5-9c82-4a6c-83c8-02f76531493c","Type":"ContainerDied","Data":"5fa68d26a3c17cf08e9abe9e683570a4acc4e0968d267ddb33b80bf79a5fad11"} Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.298842 4917 scope.go:117] "RemoveContainer" containerID="5fa68d26a3c17cf08e9abe9e683570a4acc4e0968d267ddb33b80bf79a5fad11" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.318974 4917 generic.go:334] "Generic (PLEG): container finished" podID="1455d428-63a6-4c87-8a1d-958b4b5c1870" 
containerID="f0c657c86f84716e4dd1f32bbf9a5464bb9b35b344de17c5e0465356ef5547ba" exitCode=0 Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.319046 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" event={"ID":"1455d428-63a6-4c87-8a1d-958b4b5c1870","Type":"ContainerDied","Data":"f0c657c86f84716e4dd1f32bbf9a5464bb9b35b344de17c5e0465356ef5547ba"} Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.319809 4917 scope.go:117] "RemoveContainer" containerID="f0c657c86f84716e4dd1f32bbf9a5464bb9b35b344de17c5e0465356ef5547ba" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.340354 4917 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.340423 4917 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.340440 4917 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.340457 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdjzw\" (UniqueName: \"kubernetes.io/projected/ddf6e524-5c84-416a-a180-7e8ecffa312b-kube-api-access-hdjzw\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.340472 4917 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.340486 4917 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ddf6e524-5c84-416a-a180-7e8ecffa312b-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.340501 4917 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ddf6e524-5c84-416a-a180-7e8ecffa312b-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.345410 4917 scope.go:117] "RemoveContainer" containerID="9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575" Dec 12 00:41:03 crc kubenswrapper[4917]: E1212 00:41:03.350691 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575\": container with ID starting with 9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575 not found: ID does not exist" containerID="9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.350755 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575"} err="failed to get container status \"9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575\": rpc error: code = NotFound desc = could not find container \"9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575\": container with ID starting with 9a91469cd4dfa3d9932707b0022e0d7c4f59e23d0a04904588e90faa88530575 not found: ID does not exist" Dec 12 00:41:03 crc 
kubenswrapper[4917]: I1212 00:41:03.411506 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t8tz6"] Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.426877 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t8tz6"] Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.626907 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddf6e524-5c84-416a-a180-7e8ecffa312b" path="/var/lib/kubelet/pods/ddf6e524-5c84-416a-a180-7e8ecffa312b/volumes" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.899190 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-7cxmg"] Dec 12 00:41:03 crc kubenswrapper[4917]: E1212 00:41:03.899785 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf6e524-5c84-416a-a180-7e8ecffa312b" containerName="default-interconnect" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.899800 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf6e524-5c84-416a-a180-7e8ecffa312b" containerName="default-interconnect" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.899923 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf6e524-5c84-416a-a180-7e8ecffa312b" containerName="default-interconnect" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.900456 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.903175 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.903236 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.907822 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.908707 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-j9j5g" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.908796 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.908863 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.909033 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.920062 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-7cxmg"] Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.951376 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6ctz\" (UniqueName: \"kubernetes.io/projected/1600217d-e49f-4aa7-8be8-6dff2c94407b-kube-api-access-v6ctz\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " 
pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.951458 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/1600217d-e49f-4aa7-8be8-6dff2c94407b-sasl-config\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.951501 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.951548 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-sasl-users\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.951574 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.951609 4917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:03 crc kubenswrapper[4917]: I1212 00:41:03.951814 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.053006 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.053094 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.053152 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6ctz\" (UniqueName: 
\"kubernetes.io/projected/1600217d-e49f-4aa7-8be8-6dff2c94407b-kube-api-access-v6ctz\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.053198 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/1600217d-e49f-4aa7-8be8-6dff2c94407b-sasl-config\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.053240 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.053287 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-sasl-users\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.053318 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 
crc kubenswrapper[4917]: I1212 00:41:04.054542 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/1600217d-e49f-4aa7-8be8-6dff2c94407b-sasl-config\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.059469 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.060327 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.067531 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.069933 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.072050 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/1600217d-e49f-4aa7-8be8-6dff2c94407b-sasl-users\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.075860 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6ctz\" (UniqueName: \"kubernetes.io/projected/1600217d-e49f-4aa7-8be8-6dff2c94407b-kube-api-access-v6ctz\") pod \"default-interconnect-68864d46cb-7cxmg\" (UID: \"1600217d-e49f-4aa7-8be8-6dff2c94407b\") " pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.228788 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" Dec 12 00:41:04 crc kubenswrapper[4917]: I1212 00:41:04.678360 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-7cxmg"] Dec 12 00:41:04 crc kubenswrapper[4917]: W1212 00:41:04.685864 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1600217d_e49f_4aa7_8be8_6dff2c94407b.slice/crio-8f619eda059a65c221cfea38d8686e7c37c2016dc0c3466831b805babf348b07 WatchSource:0}: Error finding container 8f619eda059a65c221cfea38d8686e7c37c2016dc0c3466831b805babf348b07: Status 404 returned error can't find the container with id 8f619eda059a65c221cfea38d8686e7c37c2016dc0c3466831b805babf348b07 Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.349630 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" event={"ID":"1a020119-1f66-4f57-be67-c2a2b91afda1","Type":"ContainerStarted","Data":"54e28017a5ebd77ae9296eaa83e5ecbaf976a9039e4f72c30b122dff186d4936"} Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.351132 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" event={"ID":"1600217d-e49f-4aa7-8be8-6dff2c94407b","Type":"ContainerStarted","Data":"e461d020ec8166f6c66a35ce8a7dee70dd7bbcee7273741d8da641d48e137560"} Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.351162 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" event={"ID":"1600217d-e49f-4aa7-8be8-6dff2c94407b","Type":"ContainerStarted","Data":"8f619eda059a65c221cfea38d8686e7c37c2016dc0c3466831b805babf348b07"} Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.354504 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" event={"ID":"339269b5-9c82-4a6c-83c8-02f76531493c","Type":"ContainerStarted","Data":"8eb2a20543d110d1e959a0175e8e90d98a5d1df9e21238c8f225789fbbd8e021"} Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.356852 4917 generic.go:334] "Generic (PLEG): container finished" podID="fe8e080e-f50f-4d62-b8bf-db02d45c9dd9" containerID="0369a9d180a356edc438ea30c697b586072c9432f883952d0be944d274819ee5" exitCode=0 Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.356932 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" event={"ID":"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9","Type":"ContainerDied","Data":"0369a9d180a356edc438ea30c697b586072c9432f883952d0be944d274819ee5"} Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.357941 4917 scope.go:117] "RemoveContainer" containerID="0369a9d180a356edc438ea30c697b586072c9432f883952d0be944d274819ee5" Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.359733 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" event={"ID":"1455d428-63a6-4c87-8a1d-958b4b5c1870","Type":"ContainerStarted","Data":"8c125995b9c0344566a68ff3c4259af41cad7e429a34f8d9b720b72e1bce14ed"} Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.363211 4917 generic.go:334] "Generic (PLEG): container finished" podID="7c2fa68d-9bd5-4d49-8733-79eada3821f0" containerID="2ff58277c2cbe0e6e41e6281f536967a32b10345b5e4d4863d310c7d729a8f24" exitCode=0 Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.363283 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" event={"ID":"7c2fa68d-9bd5-4d49-8733-79eada3821f0","Type":"ContainerDied","Data":"2ff58277c2cbe0e6e41e6281f536967a32b10345b5e4d4863d310c7d729a8f24"} Dec 12 00:41:05 crc 
kubenswrapper[4917]: I1212 00:41:05.363973 4917 scope.go:117] "RemoveContainer" containerID="2ff58277c2cbe0e6e41e6281f536967a32b10345b5e4d4863d310c7d729a8f24" Dec 12 00:41:05 crc kubenswrapper[4917]: I1212 00:41:05.521188 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-7cxmg" podStartSLOduration=3.521156677 podStartE2EDuration="3.521156677s" podCreationTimestamp="2025-12-12 00:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:41:05.517083009 +0000 UTC m=+2100.294883832" watchObservedRunningTime="2025-12-12 00:41:05.521156677 +0000 UTC m=+2100.298957490" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.069334 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.071087 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.079767 4917 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.080008 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.083083 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.196743 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr6mr\" (UniqueName: \"kubernetes.io/projected/1bc83425-dbad-4b59-8622-1627e4e724f8-kube-api-access-pr6mr\") pod \"qdr-test\" (UID: \"1bc83425-dbad-4b59-8622-1627e4e724f8\") " pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.196805 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/1bc83425-dbad-4b59-8622-1627e4e724f8-qdr-test-config\") pod \"qdr-test\" (UID: \"1bc83425-dbad-4b59-8622-1627e4e724f8\") " pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.196847 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/1bc83425-dbad-4b59-8622-1627e4e724f8-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"1bc83425-dbad-4b59-8622-1627e4e724f8\") " pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.298667 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr6mr\" (UniqueName: 
\"kubernetes.io/projected/1bc83425-dbad-4b59-8622-1627e4e724f8-kube-api-access-pr6mr\") pod \"qdr-test\" (UID: \"1bc83425-dbad-4b59-8622-1627e4e724f8\") " pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.298951 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/1bc83425-dbad-4b59-8622-1627e4e724f8-qdr-test-config\") pod \"qdr-test\" (UID: \"1bc83425-dbad-4b59-8622-1627e4e724f8\") " pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.298997 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/1bc83425-dbad-4b59-8622-1627e4e724f8-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"1bc83425-dbad-4b59-8622-1627e4e724f8\") " pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.307027 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/1bc83425-dbad-4b59-8622-1627e4e724f8-qdr-test-config\") pod \"qdr-test\" (UID: \"1bc83425-dbad-4b59-8622-1627e4e724f8\") " pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.322763 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/1bc83425-dbad-4b59-8622-1627e4e724f8-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"1bc83425-dbad-4b59-8622-1627e4e724f8\") " pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.327385 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr6mr\" (UniqueName: \"kubernetes.io/projected/1bc83425-dbad-4b59-8622-1627e4e724f8-kube-api-access-pr6mr\") pod \"qdr-test\" (UID: 
\"1bc83425-dbad-4b59-8622-1627e4e724f8\") " pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.371544 4917 generic.go:334] "Generic (PLEG): container finished" podID="339269b5-9c82-4a6c-83c8-02f76531493c" containerID="8eb2a20543d110d1e959a0175e8e90d98a5d1df9e21238c8f225789fbbd8e021" exitCode=0 Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.371611 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" event={"ID":"339269b5-9c82-4a6c-83c8-02f76531493c","Type":"ContainerDied","Data":"8eb2a20543d110d1e959a0175e8e90d98a5d1df9e21238c8f225789fbbd8e021"} Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.371670 4917 scope.go:117] "RemoveContainer" containerID="5fa68d26a3c17cf08e9abe9e683570a4acc4e0968d267ddb33b80bf79a5fad11" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.372222 4917 scope.go:117] "RemoveContainer" containerID="8eb2a20543d110d1e959a0175e8e90d98a5d1df9e21238c8f225789fbbd8e021" Dec 12 00:41:06 crc kubenswrapper[4917]: E1212 00:41:06.372499 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v_service-telemetry(339269b5-9c82-4a6c-83c8-02f76531493c)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" podUID="339269b5-9c82-4a6c-83c8-02f76531493c" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.377724 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" event={"ID":"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9","Type":"ContainerStarted","Data":"c2bc3475ef2e4dae54df5f1c6c9c6548b12a4970661bed8a308203bfb4fe54f0"} Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.381907 4917 generic.go:334] "Generic (PLEG): container finished" 
podID="1455d428-63a6-4c87-8a1d-958b4b5c1870" containerID="8c125995b9c0344566a68ff3c4259af41cad7e429a34f8d9b720b72e1bce14ed" exitCode=0 Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.381992 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" event={"ID":"1455d428-63a6-4c87-8a1d-958b4b5c1870","Type":"ContainerDied","Data":"8c125995b9c0344566a68ff3c4259af41cad7e429a34f8d9b720b72e1bce14ed"} Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.382404 4917 scope.go:117] "RemoveContainer" containerID="8c125995b9c0344566a68ff3c4259af41cad7e429a34f8d9b720b72e1bce14ed" Dec 12 00:41:06 crc kubenswrapper[4917]: E1212 00:41:06.382600 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc_service-telemetry(1455d428-63a6-4c87-8a1d-958b4b5c1870)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" podUID="1455d428-63a6-4c87-8a1d-958b4b5c1870" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.385031 4917 generic.go:334] "Generic (PLEG): container finished" podID="1a020119-1f66-4f57-be67-c2a2b91afda1" containerID="54e28017a5ebd77ae9296eaa83e5ecbaf976a9039e4f72c30b122dff186d4936" exitCode=0 Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.385102 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" event={"ID":"1a020119-1f66-4f57-be67-c2a2b91afda1","Type":"ContainerDied","Data":"54e28017a5ebd77ae9296eaa83e5ecbaf976a9039e4f72c30b122dff186d4936"} Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.385747 4917 scope.go:117] "RemoveContainer" containerID="54e28017a5ebd77ae9296eaa83e5ecbaf976a9039e4f72c30b122dff186d4936" Dec 12 00:41:06 crc kubenswrapper[4917]: E1212 00:41:06.385948 4917 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74_service-telemetry(1a020119-1f66-4f57-be67-c2a2b91afda1)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" podUID="1a020119-1f66-4f57-be67-c2a2b91afda1" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.402226 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.726588 4917 scope.go:117] "RemoveContainer" containerID="f0c657c86f84716e4dd1f32bbf9a5464bb9b35b344de17c5e0465356ef5547ba" Dec 12 00:41:06 crc kubenswrapper[4917]: I1212 00:41:06.906835 4917 scope.go:117] "RemoveContainer" containerID="fadd718aa6b190606e08a0542ecdfccf4645c63db6429173740aefe173e9c66f" Dec 12 00:41:07 crc kubenswrapper[4917]: I1212 00:41:07.049013 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 12 00:41:07 crc kubenswrapper[4917]: W1212 00:41:07.052446 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bc83425_dbad_4b59_8622_1627e4e724f8.slice/crio-214f69cc54e73b27782efb49b3e7d9ab85ce0ab5c3b54403f0bbea2bd2c23f27 WatchSource:0}: Error finding container 214f69cc54e73b27782efb49b3e7d9ab85ce0ab5c3b54403f0bbea2bd2c23f27: Status 404 returned error can't find the container with id 214f69cc54e73b27782efb49b3e7d9ab85ce0ab5c3b54403f0bbea2bd2c23f27 Dec 12 00:41:07 crc kubenswrapper[4917]: I1212 00:41:07.400352 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"1bc83425-dbad-4b59-8622-1627e4e724f8","Type":"ContainerStarted","Data":"214f69cc54e73b27782efb49b3e7d9ab85ce0ab5c3b54403f0bbea2bd2c23f27"} Dec 12 00:41:07 crc kubenswrapper[4917]: I1212 
00:41:07.411009 4917 generic.go:334] "Generic (PLEG): container finished" podID="fe8e080e-f50f-4d62-b8bf-db02d45c9dd9" containerID="c2bc3475ef2e4dae54df5f1c6c9c6548b12a4970661bed8a308203bfb4fe54f0" exitCode=0 Dec 12 00:41:07 crc kubenswrapper[4917]: I1212 00:41:07.411107 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" event={"ID":"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9","Type":"ContainerDied","Data":"c2bc3475ef2e4dae54df5f1c6c9c6548b12a4970661bed8a308203bfb4fe54f0"} Dec 12 00:41:07 crc kubenswrapper[4917]: I1212 00:41:07.411210 4917 scope.go:117] "RemoveContainer" containerID="0369a9d180a356edc438ea30c697b586072c9432f883952d0be944d274819ee5" Dec 12 00:41:07 crc kubenswrapper[4917]: I1212 00:41:07.412221 4917 scope.go:117] "RemoveContainer" containerID="c2bc3475ef2e4dae54df5f1c6c9c6548b12a4970661bed8a308203bfb4fe54f0" Dec 12 00:41:07 crc kubenswrapper[4917]: E1212 00:41:07.412496 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7_service-telemetry(fe8e080e-f50f-4d62-b8bf-db02d45c9dd9)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" podUID="fe8e080e-f50f-4d62-b8bf-db02d45c9dd9" Dec 12 00:41:07 crc kubenswrapper[4917]: I1212 00:41:07.429185 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" event={"ID":"7c2fa68d-9bd5-4d49-8733-79eada3821f0","Type":"ContainerStarted","Data":"39062fbd1e4856a80c937cf5718290996666676ea4eea47b6f5ae408a3590440"} Dec 12 00:41:08 crc kubenswrapper[4917]: I1212 00:41:08.443745 4917 generic.go:334] "Generic (PLEG): container finished" podID="7c2fa68d-9bd5-4d49-8733-79eada3821f0" containerID="39062fbd1e4856a80c937cf5718290996666676ea4eea47b6f5ae408a3590440" 
exitCode=0 Dec 12 00:41:08 crc kubenswrapper[4917]: I1212 00:41:08.443800 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" event={"ID":"7c2fa68d-9bd5-4d49-8733-79eada3821f0","Type":"ContainerDied","Data":"39062fbd1e4856a80c937cf5718290996666676ea4eea47b6f5ae408a3590440"} Dec 12 00:41:08 crc kubenswrapper[4917]: I1212 00:41:08.445210 4917 scope.go:117] "RemoveContainer" containerID="2ff58277c2cbe0e6e41e6281f536967a32b10345b5e4d4863d310c7d729a8f24" Dec 12 00:41:08 crc kubenswrapper[4917]: I1212 00:41:08.447022 4917 scope.go:117] "RemoveContainer" containerID="39062fbd1e4856a80c937cf5718290996666676ea4eea47b6f5ae408a3590440" Dec 12 00:41:08 crc kubenswrapper[4917]: E1212 00:41:08.447710 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7_service-telemetry(7c2fa68d-9bd5-4d49-8733-79eada3821f0)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" podUID="7c2fa68d-9bd5-4d49-8733-79eada3821f0" Dec 12 00:41:17 crc kubenswrapper[4917]: I1212 00:41:17.719641 4917 scope.go:117] "RemoveContainer" containerID="8eb2a20543d110d1e959a0175e8e90d98a5d1df9e21238c8f225789fbbd8e021" Dec 12 00:41:17 crc kubenswrapper[4917]: I1212 00:41:17.732991 4917 scope.go:117] "RemoveContainer" containerID="8c125995b9c0344566a68ff3c4259af41cad7e429a34f8d9b720b72e1bce14ed" Dec 12 00:41:19 crc kubenswrapper[4917]: I1212 00:41:19.603105 4917 scope.go:117] "RemoveContainer" containerID="39062fbd1e4856a80c937cf5718290996666676ea4eea47b6f5ae408a3590440" Dec 12 00:41:19 crc kubenswrapper[4917]: I1212 00:41:19.603175 4917 scope.go:117] "RemoveContainer" containerID="54e28017a5ebd77ae9296eaa83e5ecbaf976a9039e4f72c30b122dff186d4936" Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.613880 4917 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc" event={"ID":"1455d428-63a6-4c87-8a1d-958b4b5c1870","Type":"ContainerStarted","Data":"2b416d40578f018ae5bba7751a2e44ad67739044fb58959517818173fce8031b"} Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.616450 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7" event={"ID":"7c2fa68d-9bd5-4d49-8733-79eada3821f0","Type":"ContainerStarted","Data":"5fa49fd28c0f759c1512a7ec7de1363ac2ec66b2789ef0e342d289103755fd60"} Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.619035 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74" event={"ID":"1a020119-1f66-4f57-be67-c2a2b91afda1","Type":"ContainerStarted","Data":"92939b97ba2012a7f9c6495f7697f7bc16d405295a1528e286e9c924ae6ee96d"} Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.621152 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"1bc83425-dbad-4b59-8622-1627e4e724f8","Type":"ContainerStarted","Data":"b20a8a9ecaae03a52d6e542263cca505112ed4958c4d69ee4a1146b42358a71d"} Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.624455 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v" event={"ID":"339269b5-9c82-4a6c-83c8-02f76531493c","Type":"ContainerStarted","Data":"d9d3edc4d5a4a9ba9e3e474325f92630de12f35ebc8a80cb1677e2b50a65c61b"} Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.664989 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.135668098 podStartE2EDuration="14.6649657s" podCreationTimestamp="2025-12-12 00:41:06 +0000 UTC" firstStartedPulling="2025-12-12 00:41:07.055298071 +0000 UTC m=+2101.833098884" 
lastFinishedPulling="2025-12-12 00:41:19.584595673 +0000 UTC m=+2114.362396486" observedRunningTime="2025-12-12 00:41:20.65814198 +0000 UTC m=+2115.435942783" watchObservedRunningTime="2025-12-12 00:41:20.6649657 +0000 UTC m=+2115.442766513" Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.943971 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-p6jrh"] Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.946261 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.954270 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.954311 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.959059 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.959148 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.959313 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.959409 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Dec 12 00:41:20 crc kubenswrapper[4917]: I1212 00:41:20.963044 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-p6jrh"] Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.251974 4917 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-publisher\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.253259 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-healthcheck-log\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.254808 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-sensubility-config\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.254893 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdx6\" (UniqueName: \"kubernetes.io/projected/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-kube-api-access-xwdx6\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.254927 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") 
" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.254967 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-config\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.254988 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.355898 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-sensubility-config\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.355989 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwdx6\" (UniqueName: \"kubernetes.io/projected/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-kube-api-access-xwdx6\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.356016 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-entrypoint-script\") pod 
\"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.356040 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-config\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.356062 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.356089 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-publisher\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.356131 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-healthcheck-log\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.357195 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-healthcheck-log\") 
pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.357793 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-sensubility-config\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.359271 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.359974 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-config\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.360109 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.361258 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-publisher\") 
pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.393369 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.395290 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.417277 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.418946 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwdx6\" (UniqueName: \"kubernetes.io/projected/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-kube-api-access-xwdx6\") pod \"stf-smoketest-smoke1-p6jrh\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.457783 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6v8\" (UniqueName: \"kubernetes.io/projected/86470490-4a92-4178-a5ae-2ae87a81065a-kube-api-access-5n6v8\") pod \"curl\" (UID: \"86470490-4a92-4178-a5ae-2ae87a81065a\") " pod="service-telemetry/curl" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.560255 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6v8\" (UniqueName: \"kubernetes.io/projected/86470490-4a92-4178-a5ae-2ae87a81065a-kube-api-access-5n6v8\") pod \"curl\" (UID: \"86470490-4a92-4178-a5ae-2ae87a81065a\") " pod="service-telemetry/curl" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.596189 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6v8\" (UniqueName: 
\"kubernetes.io/projected/86470490-4a92-4178-a5ae-2ae87a81065a-kube-api-access-5n6v8\") pod \"curl\" (UID: \"86470490-4a92-4178-a5ae-2ae87a81065a\") " pod="service-telemetry/curl" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.632012 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:41:21 crc kubenswrapper[4917]: I1212 00:41:21.751635 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 12 00:41:22 crc kubenswrapper[4917]: I1212 00:41:22.132860 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-p6jrh"] Dec 12 00:41:22 crc kubenswrapper[4917]: I1212 00:41:22.217398 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 12 00:41:22 crc kubenswrapper[4917]: I1212 00:41:22.602502 4917 scope.go:117] "RemoveContainer" containerID="c2bc3475ef2e4dae54df5f1c6c9c6548b12a4970661bed8a308203bfb4fe54f0" Dec 12 00:41:22 crc kubenswrapper[4917]: I1212 00:41:22.638703 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" event={"ID":"99f5ed26-9619-41e0-95e1-dad0e9e78fc9","Type":"ContainerStarted","Data":"f66a3d1d7bdc714dd767af57830d67f2665b7637c93b320d11673cfb45bcefd2"} Dec 12 00:41:22 crc kubenswrapper[4917]: I1212 00:41:22.639522 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"86470490-4a92-4178-a5ae-2ae87a81065a","Type":"ContainerStarted","Data":"5baae5995805b624382eb3015253b62a47209305e633aa50a23fab1be0050b77"} Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.453568 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kkxp9"] Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.455334 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.470825 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kkxp9"] Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.601933 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdwxr\" (UniqueName: \"kubernetes.io/projected/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-kube-api-access-zdwxr\") pod \"community-operators-kkxp9\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.602367 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-utilities\") pod \"community-operators-kkxp9\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.602387 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-catalog-content\") pod \"community-operators-kkxp9\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.703964 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-utilities\") pod \"community-operators-kkxp9\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.704043 4917 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-catalog-content\") pod \"community-operators-kkxp9\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.704140 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdwxr\" (UniqueName: \"kubernetes.io/projected/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-kube-api-access-zdwxr\") pod \"community-operators-kkxp9\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.706166 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-catalog-content\") pod \"community-operators-kkxp9\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.706439 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-utilities\") pod \"community-operators-kkxp9\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.751449 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdwxr\" (UniqueName: \"kubernetes.io/projected/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-kube-api-access-zdwxr\") pod \"community-operators-kkxp9\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:23 crc kubenswrapper[4917]: I1212 00:41:23.832282 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:25 crc kubenswrapper[4917]: I1212 00:41:25.473439 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kkxp9"] Dec 12 00:41:25 crc kubenswrapper[4917]: I1212 00:41:25.831748 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7" event={"ID":"fe8e080e-f50f-4d62-b8bf-db02d45c9dd9","Type":"ContainerStarted","Data":"fea0070c7b37749a27c1e1302e7659ff6a202a60d2f8e280496b3052482d1a82"} Dec 12 00:41:26 crc kubenswrapper[4917]: W1212 00:41:26.374399 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae6d5ae_1bec_4e57_9048_147ffea5cf39.slice/crio-378fd95fe17d58ad74d2d1b1e31c8e0f427c1ea5b77de3ec4130ab0706fb99fe WatchSource:0}: Error finding container 378fd95fe17d58ad74d2d1b1e31c8e0f427c1ea5b77de3ec4130ab0706fb99fe: Status 404 returned error can't find the container with id 378fd95fe17d58ad74d2d1b1e31c8e0f427c1ea5b77de3ec4130ab0706fb99fe Dec 12 00:41:26 crc kubenswrapper[4917]: I1212 00:41:26.856104 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkxp9" event={"ID":"9ae6d5ae-1bec-4e57-9048-147ffea5cf39","Type":"ContainerStarted","Data":"378fd95fe17d58ad74d2d1b1e31c8e0f427c1ea5b77de3ec4130ab0706fb99fe"} Dec 12 00:41:27 crc kubenswrapper[4917]: I1212 00:41:27.877685 4917 generic.go:334] "Generic (PLEG): container finished" podID="86470490-4a92-4178-a5ae-2ae87a81065a" containerID="9b3bb4fcadd0fa93b537d41542c46171e14dad4b2d9ef86ab9b4fc201f049e10" exitCode=0 Dec 12 00:41:27 crc kubenswrapper[4917]: I1212 00:41:27.877769 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" 
event={"ID":"86470490-4a92-4178-a5ae-2ae87a81065a","Type":"ContainerDied","Data":"9b3bb4fcadd0fa93b537d41542c46171e14dad4b2d9ef86ab9b4fc201f049e10"} Dec 12 00:41:27 crc kubenswrapper[4917]: I1212 00:41:27.884893 4917 generic.go:334] "Generic (PLEG): container finished" podID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerID="b368a9c73087873b30f4805dfe910e882ba885af42cee123a3d99eeec9427bb3" exitCode=0 Dec 12 00:41:27 crc kubenswrapper[4917]: I1212 00:41:27.884969 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkxp9" event={"ID":"9ae6d5ae-1bec-4e57-9048-147ffea5cf39","Type":"ContainerDied","Data":"b368a9c73087873b30f4805dfe910e882ba885af42cee123a3d99eeec9427bb3"} Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.862906 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vjzd9"] Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.865255 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.867573 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vjzd9"] Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.874047 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-catalog-content\") pod \"redhat-operators-vjzd9\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.874218 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8c4r\" (UniqueName: \"kubernetes.io/projected/5d6c2620-5dea-4f33-8df1-3247e350d6cd-kube-api-access-t8c4r\") pod \"redhat-operators-vjzd9\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.874260 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-utilities\") pod \"redhat-operators-vjzd9\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.918510 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkxp9" event={"ID":"9ae6d5ae-1bec-4e57-9048-147ffea5cf39","Type":"ContainerStarted","Data":"32dc4fac8cc3515c02e89f00ba0c4b32c813055c569ef31618c6b87111e3d227"} Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.982305 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8c4r\" (UniqueName: 
\"kubernetes.io/projected/5d6c2620-5dea-4f33-8df1-3247e350d6cd-kube-api-access-t8c4r\") pod \"redhat-operators-vjzd9\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.982366 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-utilities\") pod \"redhat-operators-vjzd9\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.982409 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-catalog-content\") pod \"redhat-operators-vjzd9\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.982975 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-catalog-content\") pod \"redhat-operators-vjzd9\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:28 crc kubenswrapper[4917]: I1212 00:41:28.984101 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-utilities\") pod \"redhat-operators-vjzd9\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:29 crc kubenswrapper[4917]: I1212 00:41:29.020811 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8c4r\" (UniqueName: 
\"kubernetes.io/projected/5d6c2620-5dea-4f33-8df1-3247e350d6cd-kube-api-access-t8c4r\") pod \"redhat-operators-vjzd9\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:29 crc kubenswrapper[4917]: I1212 00:41:29.203065 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:29 crc kubenswrapper[4917]: I1212 00:41:29.563089 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 12 00:41:29 crc kubenswrapper[4917]: I1212 00:41:29.595680 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n6v8\" (UniqueName: \"kubernetes.io/projected/86470490-4a92-4178-a5ae-2ae87a81065a-kube-api-access-5n6v8\") pod \"86470490-4a92-4178-a5ae-2ae87a81065a\" (UID: \"86470490-4a92-4178-a5ae-2ae87a81065a\") " Dec 12 00:41:29 crc kubenswrapper[4917]: I1212 00:41:29.625345 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86470490-4a92-4178-a5ae-2ae87a81065a-kube-api-access-5n6v8" (OuterVolumeSpecName: "kube-api-access-5n6v8") pod "86470490-4a92-4178-a5ae-2ae87a81065a" (UID: "86470490-4a92-4178-a5ae-2ae87a81065a"). InnerVolumeSpecName "kube-api-access-5n6v8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:41:29 crc kubenswrapper[4917]: I1212 00:41:29.640063 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:41:29 crc kubenswrapper[4917]: I1212 00:41:29.640139 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:29.697774 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n6v8\" (UniqueName: \"kubernetes.io/projected/86470490-4a92-4178-a5ae-2ae87a81065a-kube-api-access-5n6v8\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:29.911553 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vjzd9"] Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:29.928266 4917 generic.go:334] "Generic (PLEG): container finished" podID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerID="32dc4fac8cc3515c02e89f00ba0c4b32c813055c569ef31618c6b87111e3d227" exitCode=0 Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:29.928359 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkxp9" event={"ID":"9ae6d5ae-1bec-4e57-9048-147ffea5cf39","Type":"ContainerDied","Data":"32dc4fac8cc3515c02e89f00ba0c4b32c813055c569ef31618c6b87111e3d227"} Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:29.934403 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-vjzd9" event={"ID":"5d6c2620-5dea-4f33-8df1-3247e350d6cd","Type":"ContainerStarted","Data":"ddecee0702661a42168d0829b260726ba2be2dd832f26e40e728544b107089ed"} Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:29.936587 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"86470490-4a92-4178-a5ae-2ae87a81065a","Type":"ContainerDied","Data":"5baae5995805b624382eb3015253b62a47209305e633aa50a23fab1be0050b77"} Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:29.936628 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5baae5995805b624382eb3015253b62a47209305e633aa50a23fab1be0050b77" Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:29.936674 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:30.233972 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_86470490-4a92-4178-a5ae-2ae87a81065a/curl/0.log" Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:30.490684 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-pschj_58a31d61-50bc-4a00-9040-7ece16fa7c9d/prometheus-webhook-snmp/0.log" Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:32.576804 4917 generic.go:334] "Generic (PLEG): container finished" podID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerID="ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c" exitCode=0 Dec 12 00:41:32 crc kubenswrapper[4917]: I1212 00:41:32.576887 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjzd9" event={"ID":"5d6c2620-5dea-4f33-8df1-3247e350d6cd","Type":"ContainerDied","Data":"ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c"} Dec 12 00:41:41 crc kubenswrapper[4917]: E1212 00:41:41.793801 4917 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleomastercentos9/openstack-collectd:current-tripleo" Dec 12 00:41:41 crc kubenswrapper[4917]: E1212 00:41:41.794301 4917 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:smoketest-collectd,Image:quay.io/tripleomastercentos9/openstack-collectd:current-tripleo,Command:[/smoketest_collectd_entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLOUDNAME,Value:smoke1,ValueFrom:nil,},EnvVar{Name:ELASTICSEARCH_AUTH_PASS,Value:aSlpdinnUltzv8JbDkBV8va0,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_AUTH_TOKEN,Value:eyJhbGciOiJSUzI1NiIsImtpZCI6InF6SnFxNFFjbVk5VmJQZ2dNMmUxdHFmTlJlVWx4UDhSTlhIamV3RUx4WU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc2NTUwMzY2MCwiaWF0IjoxNzY1NTAwMDYwLCJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJqdGkiOiJhZTg0YjlhMy0wNzVjLTQwOTEtYjQ0YS0xOWJiYzBmMmQ4MWIiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InNlcnZpY2UtdGVsZW1ldHJ5Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InN0Zi1wcm9tZXRoZXVzLXJlYWRlciIsInVpZCI6IjJlNWNhYjQ3LTgxN2EtNGZjNS04NDliLTNlYTdjYTg0NDA1ZSJ9fSwibmJmIjoxNzY1NTAwMDYwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c2VydmljZS10ZWxlbWV0cnk6c3RmLXByb21ldGhldXMtcmVhZGVyIn0.Uo2C5hAhYKFlff5TQ4XsZPebmrhsaAh3Nd4oPcLtlyioOK_Xu5YnzjvAlZ6y8gNx0-L4wFAMibtYtjr1v1ExmXAz09LKVVR0E1QyBZNIKdRios-zCFQg_BcuvO_ZAIzEhX9hvBx0VpI1EDGggh8IyptHkep7ok2sIwyo_XIRdYUL_cbBQdYkKJ5JBf5NrAdlJKAy7ljGdegNWHsSm4UCKqo29vT4HYXccTdvDPjeEg1BUZkdJWGB7k-Z0SFaRzUxG0G8d8WPe808BKMURop-g2jNG7kGRnr-lfFWqfDLXU6S1tTNzUShf8FOg0ePuR4dHbrHKth_3G-DI0MVn7rcrtqSciW2d2xCqpgZ-lu4RV1b5zi7vpTBGt1OFyThrn34xKW8Z_1K64jyd45frH3Km3wLFiAGZQkY8n0Ok5p-pjqUQ8pAW_DuKpg43Lx6t6t9DNolRZaM53Q-XSLnR0AVd9M4hV3_YUP2Cvnq-pIvwEHC_7JJeO0y2Y9jYKET3vrauxEYa2v86IzaRqlEvmnQ5JESNB0l6PFkKrz_ShHU3eDFUrb7WcXKGqmUVvLMBKVVRnEVXk6UH_DH_bl0P_JRkLHHTo3JYVFEQPMJDVL58s2WsZb-YIr0NcYm4flCVIXoP2IQCTi056S6_U8K41ibq7-AyOz8xOQgNCmUUcnM0f8,ValueFro
m:nil,},EnvVar{Name:OBSERVABILITY_STRATEGY,Value:<>,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:collectd-config,ReadOnly:false,MountPath:/etc/minimal-collectd.conf.template,SubPath:minimal-collectd.conf.template,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sensubility-config,ReadOnly:false,MountPath:/etc/collectd-sensubility.conf,SubPath:collectd-sensubility.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:healthcheck-log,ReadOnly:false,MountPath:/healthcheck.log,SubPath:healthcheck.log,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:collectd-entrypoint-script,ReadOnly:false,MountPath:/smoketest_collectd_entrypoint.sh,SubPath:smoketest_collectd_entrypoint.sh,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwdx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod stf-smoketest-smoke1-p6jrh_service-telemetry(99f5ed26-9619-41e0-95e1-dad0e9e78fc9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 00:41:42 crc kubenswrapper[4917]: I1212 
00:41:42.670823 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkxp9" event={"ID":"9ae6d5ae-1bec-4e57-9048-147ffea5cf39","Type":"ContainerStarted","Data":"8cf2d31461d49168a335c3dfd1fd72e0f0617f2b3f501f8aea850386000321d6"} Dec 12 00:41:42 crc kubenswrapper[4917]: I1212 00:41:42.673692 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjzd9" event={"ID":"5d6c2620-5dea-4f33-8df1-3247e350d6cd","Type":"ContainerStarted","Data":"64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967"} Dec 12 00:41:42 crc kubenswrapper[4917]: I1212 00:41:42.718475 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kkxp9" podStartSLOduration=5.772917487 podStartE2EDuration="19.718452198s" podCreationTimestamp="2025-12-12 00:41:23 +0000 UTC" firstStartedPulling="2025-12-12 00:41:27.888078064 +0000 UTC m=+2122.665878887" lastFinishedPulling="2025-12-12 00:41:41.833612785 +0000 UTC m=+2136.611413598" observedRunningTime="2025-12-12 00:41:42.693376985 +0000 UTC m=+2137.471177808" watchObservedRunningTime="2025-12-12 00:41:42.718452198 +0000 UTC m=+2137.496253011" Dec 12 00:41:43 crc kubenswrapper[4917]: I1212 00:41:43.833062 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:43 crc kubenswrapper[4917]: I1212 00:41:43.833419 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:44 crc kubenswrapper[4917]: I1212 00:41:44.895522 4917 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kkxp9" podUID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerName="registry-server" probeResult="failure" output=< Dec 12 00:41:44 crc kubenswrapper[4917]: timeout: failed to connect service ":50051" within 1s Dec 12 
00:41:44 crc kubenswrapper[4917]: > Dec 12 00:41:45 crc kubenswrapper[4917]: I1212 00:41:45.700085 4917 generic.go:334] "Generic (PLEG): container finished" podID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerID="64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967" exitCode=0 Dec 12 00:41:45 crc kubenswrapper[4917]: I1212 00:41:45.700146 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjzd9" event={"ID":"5d6c2620-5dea-4f33-8df1-3247e350d6cd","Type":"ContainerDied","Data":"64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967"} Dec 12 00:41:48 crc kubenswrapper[4917]: I1212 00:41:48.726034 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjzd9" event={"ID":"5d6c2620-5dea-4f33-8df1-3247e350d6cd","Type":"ContainerStarted","Data":"3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c"} Dec 12 00:41:50 crc kubenswrapper[4917]: I1212 00:41:50.765724 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vjzd9" podStartSLOduration=10.921688649 podStartE2EDuration="22.76569925s" podCreationTimestamp="2025-12-12 00:41:28 +0000 UTC" firstStartedPulling="2025-12-12 00:41:36.066231784 +0000 UTC m=+2130.844032597" lastFinishedPulling="2025-12-12 00:41:47.910242385 +0000 UTC m=+2142.688043198" observedRunningTime="2025-12-12 00:41:50.763735039 +0000 UTC m=+2145.541535862" watchObservedRunningTime="2025-12-12 00:41:50.76569925 +0000 UTC m=+2145.543500063" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.303126 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g8xrd"] Dec 12 00:41:51 crc kubenswrapper[4917]: E1212 00:41:51.303522 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86470490-4a92-4178-a5ae-2ae87a81065a" containerName="curl" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.303542 4917 
state_mem.go:107] "Deleted CPUSet assignment" podUID="86470490-4a92-4178-a5ae-2ae87a81065a" containerName="curl" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.303765 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="86470490-4a92-4178-a5ae-2ae87a81065a" containerName="curl" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.304983 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.312154 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8xrd"] Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.447609 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f03c62-f504-466a-87d3-15b358c3cd0e-catalog-content\") pod \"certified-operators-g8xrd\" (UID: \"d8f03c62-f504-466a-87d3-15b358c3cd0e\") " pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.447726 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f03c62-f504-466a-87d3-15b358c3cd0e-utilities\") pod \"certified-operators-g8xrd\" (UID: \"d8f03c62-f504-466a-87d3-15b358c3cd0e\") " pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.447774 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdqh\" (UniqueName: \"kubernetes.io/projected/d8f03c62-f504-466a-87d3-15b358c3cd0e-kube-api-access-xtdqh\") pod \"certified-operators-g8xrd\" (UID: \"d8f03c62-f504-466a-87d3-15b358c3cd0e\") " pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.549945 4917 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f03c62-f504-466a-87d3-15b358c3cd0e-catalog-content\") pod \"certified-operators-g8xrd\" (UID: \"d8f03c62-f504-466a-87d3-15b358c3cd0e\") " pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.550021 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f03c62-f504-466a-87d3-15b358c3cd0e-utilities\") pod \"certified-operators-g8xrd\" (UID: \"d8f03c62-f504-466a-87d3-15b358c3cd0e\") " pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.550048 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdqh\" (UniqueName: \"kubernetes.io/projected/d8f03c62-f504-466a-87d3-15b358c3cd0e-kube-api-access-xtdqh\") pod \"certified-operators-g8xrd\" (UID: \"d8f03c62-f504-466a-87d3-15b358c3cd0e\") " pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.550411 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f03c62-f504-466a-87d3-15b358c3cd0e-catalog-content\") pod \"certified-operators-g8xrd\" (UID: \"d8f03c62-f504-466a-87d3-15b358c3cd0e\") " pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.550693 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f03c62-f504-466a-87d3-15b358c3cd0e-utilities\") pod \"certified-operators-g8xrd\" (UID: \"d8f03c62-f504-466a-87d3-15b358c3cd0e\") " pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.569610 4917 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xtdqh\" (UniqueName: \"kubernetes.io/projected/d8f03c62-f504-466a-87d3-15b358c3cd0e-kube-api-access-xtdqh\") pod \"certified-operators-g8xrd\" (UID: \"d8f03c62-f504-466a-87d3-15b358c3cd0e\") " pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:51 crc kubenswrapper[4917]: I1212 00:41:51.633286 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:41:53 crc kubenswrapper[4917]: I1212 00:41:53.885821 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:53 crc kubenswrapper[4917]: I1212 00:41:53.930735 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:55 crc kubenswrapper[4917]: I1212 00:41:55.476174 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kkxp9"] Dec 12 00:41:55 crc kubenswrapper[4917]: I1212 00:41:55.476459 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kkxp9" podUID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerName="registry-server" containerID="cri-o://8cf2d31461d49168a335c3dfd1fd72e0f0617f2b3f501f8aea850386000321d6" gracePeriod=2 Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.113373 4917 generic.go:334] "Generic (PLEG): container finished" podID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerID="8cf2d31461d49168a335c3dfd1fd72e0f0617f2b3f501f8aea850386000321d6" exitCode=0 Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.113460 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkxp9" 
event={"ID":"9ae6d5ae-1bec-4e57-9048-147ffea5cf39","Type":"ContainerDied","Data":"8cf2d31461d49168a335c3dfd1fd72e0f0617f2b3f501f8aea850386000321d6"} Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.296623 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:56 crc kubenswrapper[4917]: E1212 00:41:56.299348 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-collectd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" podUID="99f5ed26-9619-41e0-95e1-dad0e9e78fc9" Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.459501 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-catalog-content\") pod \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.459576 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-utilities\") pod \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.459797 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdwxr\" (UniqueName: \"kubernetes.io/projected/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-kube-api-access-zdwxr\") pod \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\" (UID: \"9ae6d5ae-1bec-4e57-9048-147ffea5cf39\") " Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.461943 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-utilities" 
(OuterVolumeSpecName: "utilities") pod "9ae6d5ae-1bec-4e57-9048-147ffea5cf39" (UID: "9ae6d5ae-1bec-4e57-9048-147ffea5cf39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.469326 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-kube-api-access-zdwxr" (OuterVolumeSpecName: "kube-api-access-zdwxr") pod "9ae6d5ae-1bec-4e57-9048-147ffea5cf39" (UID: "9ae6d5ae-1bec-4e57-9048-147ffea5cf39"). InnerVolumeSpecName "kube-api-access-zdwxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.479479 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8xrd"] Dec 12 00:41:56 crc kubenswrapper[4917]: W1212 00:41:56.481868 4917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8f03c62_f504_466a_87d3_15b358c3cd0e.slice/crio-0c3070d1da1cde714d799bf83ca215e2483acac623d2a4531b262211fae03625 WatchSource:0}: Error finding container 0c3070d1da1cde714d799bf83ca215e2483acac623d2a4531b262211fae03625: Status 404 returned error can't find the container with id 0c3070d1da1cde714d799bf83ca215e2483acac623d2a4531b262211fae03625 Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.513963 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ae6d5ae-1bec-4e57-9048-147ffea5cf39" (UID: "9ae6d5ae-1bec-4e57-9048-147ffea5cf39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.562262 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdwxr\" (UniqueName: \"kubernetes.io/projected/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-kube-api-access-zdwxr\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.562293 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:56 crc kubenswrapper[4917]: I1212 00:41:56.562303 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae6d5ae-1bec-4e57-9048-147ffea5cf39-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:41:56 crc kubenswrapper[4917]: E1212 00:41:56.701269 4917 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8f03c62_f504_466a_87d3_15b358c3cd0e.slice/crio-f3f5fd27bd60135069d448834e72214f85601fb4eef3686455ef3bf7140f74fc.scope\": RecentStats: unable to find data in memory cache]" Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.142621 4917 generic.go:334] "Generic (PLEG): container finished" podID="d8f03c62-f504-466a-87d3-15b358c3cd0e" containerID="f3f5fd27bd60135069d448834e72214f85601fb4eef3686455ef3bf7140f74fc" exitCode=0 Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.142700 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8xrd" event={"ID":"d8f03c62-f504-466a-87d3-15b358c3cd0e","Type":"ContainerDied","Data":"f3f5fd27bd60135069d448834e72214f85601fb4eef3686455ef3bf7140f74fc"} Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.143263 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-g8xrd" event={"ID":"d8f03c62-f504-466a-87d3-15b358c3cd0e","Type":"ContainerStarted","Data":"0c3070d1da1cde714d799bf83ca215e2483acac623d2a4531b262211fae03625"} Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.147611 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkxp9" event={"ID":"9ae6d5ae-1bec-4e57-9048-147ffea5cf39","Type":"ContainerDied","Data":"378fd95fe17d58ad74d2d1b1e31c8e0f427c1ea5b77de3ec4130ab0706fb99fe"} Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.147751 4917 scope.go:117] "RemoveContainer" containerID="8cf2d31461d49168a335c3dfd1fd72e0f0617f2b3f501f8aea850386000321d6" Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.147919 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kkxp9" Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.150899 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" event={"ID":"99f5ed26-9619-41e0-95e1-dad0e9e78fc9","Type":"ContainerStarted","Data":"eb4ad9c1638aad84ddf5940201ad40c308dfc3fa4651481baa431b2cf393f303"} Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.181131 4917 scope.go:117] "RemoveContainer" containerID="32dc4fac8cc3515c02e89f00ba0c4b32c813055c569ef31618c6b87111e3d227" Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.215730 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kkxp9"] Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.219079 4917 scope.go:117] "RemoveContainer" containerID="b368a9c73087873b30f4805dfe910e882ba885af42cee123a3d99eeec9427bb3" Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.223042 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kkxp9"] Dec 12 00:41:57 crc kubenswrapper[4917]: I1212 00:41:57.612605 4917 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" path="/var/lib/kubelet/pods/9ae6d5ae-1bec-4e57-9048-147ffea5cf39/volumes" Dec 12 00:41:59 crc kubenswrapper[4917]: I1212 00:41:59.184414 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" event={"ID":"99f5ed26-9619-41e0-95e1-dad0e9e78fc9","Type":"ContainerStarted","Data":"a994a4a989727ce3b0100346c0c640fcdc4c1387ebe35e45424bd86298347b26"} Dec 12 00:41:59 crc kubenswrapper[4917]: I1212 00:41:59.205992 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:59 crc kubenswrapper[4917]: I1212 00:41:59.206447 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:59 crc kubenswrapper[4917]: I1212 00:41:59.216314 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" podStartSLOduration=3.184935124 podStartE2EDuration="39.216274519s" podCreationTimestamp="2025-12-12 00:41:20 +0000 UTC" firstStartedPulling="2025-12-12 00:41:22.147912831 +0000 UTC m=+2116.925713644" lastFinishedPulling="2025-12-12 00:41:58.179252226 +0000 UTC m=+2152.957053039" observedRunningTime="2025-12-12 00:41:59.210352211 +0000 UTC m=+2153.988153034" watchObservedRunningTime="2025-12-12 00:41:59.216274519 +0000 UTC m=+2153.994075332" Dec 12 00:41:59 crc kubenswrapper[4917]: I1212 00:41:59.264927 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:41:59 crc kubenswrapper[4917]: I1212 00:41:59.639807 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:41:59 crc kubenswrapper[4917]: I1212 00:41:59.640152 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:41:59 crc kubenswrapper[4917]: I1212 00:41:59.640209 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:41:59 crc kubenswrapper[4917]: I1212 00:41:59.641159 4917 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ae58f5b179e6c967cab3b964e831e89319949df0b3e0b977568f62d47cf198d"} pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 00:41:59 crc kubenswrapper[4917]: I1212 00:41:59.641246 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" containerID="cri-o://9ae58f5b179e6c967cab3b964e831e89319949df0b3e0b977568f62d47cf198d" gracePeriod=600 Dec 12 00:42:00 crc kubenswrapper[4917]: I1212 00:42:00.196971 4917 generic.go:334] "Generic (PLEG): container finished" podID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerID="9ae58f5b179e6c967cab3b964e831e89319949df0b3e0b977568f62d47cf198d" exitCode=0 Dec 12 00:42:00 crc kubenswrapper[4917]: I1212 00:42:00.197055 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" 
event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerDied","Data":"9ae58f5b179e6c967cab3b964e831e89319949df0b3e0b977568f62d47cf198d"} Dec 12 00:42:00 crc kubenswrapper[4917]: I1212 00:42:00.198496 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff"} Dec 12 00:42:00 crc kubenswrapper[4917]: I1212 00:42:00.199063 4917 scope.go:117] "RemoveContainer" containerID="e1f0d9ae072eb6679920ab4c5fb503ea4cc7d90e22ea08be856092883978a542" Dec 12 00:42:00 crc kubenswrapper[4917]: I1212 00:42:00.246984 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:42:00 crc kubenswrapper[4917]: I1212 00:42:00.658386 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-pschj_58a31d61-50bc-4a00-9040-7ece16fa7c9d/prometheus-webhook-snmp/0.log" Dec 12 00:42:03 crc kubenswrapper[4917]: I1212 00:42:03.849373 4917 generic.go:334] "Generic (PLEG): container finished" podID="d8f03c62-f504-466a-87d3-15b358c3cd0e" containerID="6823d03a47152da97ea690d838872892b2bd0988a02d6b9da3e6f39a681175a2" exitCode=0 Dec 12 00:42:03 crc kubenswrapper[4917]: I1212 00:42:03.849487 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8xrd" event={"ID":"d8f03c62-f504-466a-87d3-15b358c3cd0e","Type":"ContainerDied","Data":"6823d03a47152da97ea690d838872892b2bd0988a02d6b9da3e6f39a681175a2"} Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.279096 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vjzd9"] Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.279729 4917 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-vjzd9" podUID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerName="registry-server" containerID="cri-o://3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c" gracePeriod=2 Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.677842 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.763494 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8c4r\" (UniqueName: \"kubernetes.io/projected/5d6c2620-5dea-4f33-8df1-3247e350d6cd-kube-api-access-t8c4r\") pod \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.763858 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-catalog-content\") pod \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.764187 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-utilities\") pod \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\" (UID: \"5d6c2620-5dea-4f33-8df1-3247e350d6cd\") " Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.764693 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-utilities" (OuterVolumeSpecName: "utilities") pod "5d6c2620-5dea-4f33-8df1-3247e350d6cd" (UID: "5d6c2620-5dea-4f33-8df1-3247e350d6cd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.764947 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.770267 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d6c2620-5dea-4f33-8df1-3247e350d6cd-kube-api-access-t8c4r" (OuterVolumeSpecName: "kube-api-access-t8c4r") pod "5d6c2620-5dea-4f33-8df1-3247e350d6cd" (UID: "5d6c2620-5dea-4f33-8df1-3247e350d6cd"). InnerVolumeSpecName "kube-api-access-t8c4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.866319 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8c4r\" (UniqueName: \"kubernetes.io/projected/5d6c2620-5dea-4f33-8df1-3247e350d6cd-kube-api-access-t8c4r\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.869016 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8xrd" event={"ID":"d8f03c62-f504-466a-87d3-15b358c3cd0e","Type":"ContainerStarted","Data":"15d2e35ee2648306502dbbb265d15dea8729d9f5853aba156ccd4034059fe9a7"} Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.874614 4917 generic.go:334] "Generic (PLEG): container finished" podID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerID="3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c" exitCode=0 Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.874671 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjzd9" event={"ID":"5d6c2620-5dea-4f33-8df1-3247e350d6cd","Type":"ContainerDied","Data":"3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c"} Dec 12 00:42:04 crc kubenswrapper[4917]: 
I1212 00:42:04.874930 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjzd9" event={"ID":"5d6c2620-5dea-4f33-8df1-3247e350d6cd","Type":"ContainerDied","Data":"ddecee0702661a42168d0829b260726ba2be2dd832f26e40e728544b107089ed"} Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.875016 4917 scope.go:117] "RemoveContainer" containerID="3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.875238 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vjzd9" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.885464 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d6c2620-5dea-4f33-8df1-3247e350d6cd" (UID: "5d6c2620-5dea-4f33-8df1-3247e350d6cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.899142 4917 scope.go:117] "RemoveContainer" containerID="64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.902074 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g8xrd" podStartSLOduration=6.592694001 podStartE2EDuration="13.902052595s" podCreationTimestamp="2025-12-12 00:41:51 +0000 UTC" firstStartedPulling="2025-12-12 00:41:57.146320061 +0000 UTC m=+2151.924120884" lastFinishedPulling="2025-12-12 00:42:04.455678625 +0000 UTC m=+2159.233479478" observedRunningTime="2025-12-12 00:42:04.89769212 +0000 UTC m=+2159.675492953" watchObservedRunningTime="2025-12-12 00:42:04.902052595 +0000 UTC m=+2159.679853428" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.923989 4917 scope.go:117] "RemoveContainer" containerID="ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.942987 4917 scope.go:117] "RemoveContainer" containerID="3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c" Dec 12 00:42:04 crc kubenswrapper[4917]: E1212 00:42:04.944206 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c\": container with ID starting with 3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c not found: ID does not exist" containerID="3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.944269 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c"} err="failed to get container status 
\"3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c\": rpc error: code = NotFound desc = could not find container \"3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c\": container with ID starting with 3ba1a7f691b700a2517ca6982cb5a5243f3d7b1303d7125757bd3dd71af7e63c not found: ID does not exist" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.944305 4917 scope.go:117] "RemoveContainer" containerID="64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967" Dec 12 00:42:04 crc kubenswrapper[4917]: E1212 00:42:04.944760 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967\": container with ID starting with 64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967 not found: ID does not exist" containerID="64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.944795 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967"} err="failed to get container status \"64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967\": rpc error: code = NotFound desc = could not find container \"64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967\": container with ID starting with 64aa41524ded94140ea268ecea27940613f36eb1b5e658c1194546b01608a967 not found: ID does not exist" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.944810 4917 scope.go:117] "RemoveContainer" containerID="ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c" Dec 12 00:42:04 crc kubenswrapper[4917]: E1212 00:42:04.945064 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c\": container with ID starting with ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c not found: ID does not exist" containerID="ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.945089 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c"} err="failed to get container status \"ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c\": rpc error: code = NotFound desc = could not find container \"ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c\": container with ID starting with ad2a215a2a5b319651eb60c2d6dff99a2c51c0f03b8f4fbfa61a72850b89689c not found: ID does not exist" Dec 12 00:42:04 crc kubenswrapper[4917]: I1212 00:42:04.968530 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6c2620-5dea-4f33-8df1-3247e350d6cd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:05 crc kubenswrapper[4917]: I1212 00:42:05.203781 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vjzd9"] Dec 12 00:42:05 crc kubenswrapper[4917]: I1212 00:42:05.209398 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vjzd9"] Dec 12 00:42:05 crc kubenswrapper[4917]: I1212 00:42:05.628360 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" path="/var/lib/kubelet/pods/5d6c2620-5dea-4f33-8df1-3247e350d6cd/volumes" Dec 12 00:42:11 crc kubenswrapper[4917]: I1212 00:42:11.633988 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:42:11 crc kubenswrapper[4917]: I1212 00:42:11.634684 4917 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:42:11 crc kubenswrapper[4917]: I1212 00:42:11.703277 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:42:11 crc kubenswrapper[4917]: I1212 00:42:11.999553 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g8xrd" Dec 12 00:42:12 crc kubenswrapper[4917]: I1212 00:42:12.128020 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8xrd"] Dec 12 00:42:12 crc kubenswrapper[4917]: I1212 00:42:12.679005 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hcbwr"] Dec 12 00:42:12 crc kubenswrapper[4917]: I1212 00:42:12.679386 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hcbwr" podUID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerName="registry-server" containerID="cri-o://5f6eef5d872d53d52c6380cc6a1fe394f09b2a3e09a5f4413cb17658444979ba" gracePeriod=2 Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.074068 4917 generic.go:334] "Generic (PLEG): container finished" podID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerID="5f6eef5d872d53d52c6380cc6a1fe394f09b2a3e09a5f4413cb17658444979ba" exitCode=0 Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.076119 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbwr" event={"ID":"f4e49af0-4a50-42ce-af81-a397919c9df2","Type":"ContainerDied","Data":"5f6eef5d872d53d52c6380cc6a1fe394f09b2a3e09a5f4413cb17658444979ba"} Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.473129 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.582827 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-catalog-content\") pod \"f4e49af0-4a50-42ce-af81-a397919c9df2\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.583183 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-utilities\") pod \"f4e49af0-4a50-42ce-af81-a397919c9df2\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.583238 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x7h6\" (UniqueName: \"kubernetes.io/projected/f4e49af0-4a50-42ce-af81-a397919c9df2-kube-api-access-8x7h6\") pod \"f4e49af0-4a50-42ce-af81-a397919c9df2\" (UID: \"f4e49af0-4a50-42ce-af81-a397919c9df2\") " Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.583866 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-utilities" (OuterVolumeSpecName: "utilities") pod "f4e49af0-4a50-42ce-af81-a397919c9df2" (UID: "f4e49af0-4a50-42ce-af81-a397919c9df2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.584122 4917 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.591904 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e49af0-4a50-42ce-af81-a397919c9df2-kube-api-access-8x7h6" (OuterVolumeSpecName: "kube-api-access-8x7h6") pod "f4e49af0-4a50-42ce-af81-a397919c9df2" (UID: "f4e49af0-4a50-42ce-af81-a397919c9df2"). InnerVolumeSpecName "kube-api-access-8x7h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.638677 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4e49af0-4a50-42ce-af81-a397919c9df2" (UID: "f4e49af0-4a50-42ce-af81-a397919c9df2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.701477 4917 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e49af0-4a50-42ce-af81-a397919c9df2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:13 crc kubenswrapper[4917]: I1212 00:42:13.701516 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x7h6\" (UniqueName: \"kubernetes.io/projected/f4e49af0-4a50-42ce-af81-a397919c9df2-kube-api-access-8x7h6\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:14 crc kubenswrapper[4917]: I1212 00:42:14.084139 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcbwr" event={"ID":"f4e49af0-4a50-42ce-af81-a397919c9df2","Type":"ContainerDied","Data":"2ec4a11ad97a4be30c59fe57ffe3f6c1a84095225c60ab75a209b6825ab88ad0"} Dec 12 00:42:14 crc kubenswrapper[4917]: I1212 00:42:14.084187 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hcbwr" Dec 12 00:42:14 crc kubenswrapper[4917]: I1212 00:42:14.084219 4917 scope.go:117] "RemoveContainer" containerID="5f6eef5d872d53d52c6380cc6a1fe394f09b2a3e09a5f4413cb17658444979ba" Dec 12 00:42:14 crc kubenswrapper[4917]: I1212 00:42:14.110944 4917 scope.go:117] "RemoveContainer" containerID="3cd36e080ec093f66f1a44d8d6b13cd57d5449e85daeb7cdda2ccb4c67092b1a" Dec 12 00:42:14 crc kubenswrapper[4917]: I1212 00:42:14.117291 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hcbwr"] Dec 12 00:42:14 crc kubenswrapper[4917]: I1212 00:42:14.124134 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hcbwr"] Dec 12 00:42:14 crc kubenswrapper[4917]: I1212 00:42:14.133336 4917 scope.go:117] "RemoveContainer" containerID="b18553363f574cc9e9969ffff8b34647b0cd485403c39710d6c3257864b4368d" Dec 12 00:42:15 crc kubenswrapper[4917]: I1212 00:42:15.609082 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e49af0-4a50-42ce-af81-a397919c9df2" path="/var/lib/kubelet/pods/f4e49af0-4a50-42ce-af81-a397919c9df2/volumes" Dec 12 00:42:18 crc kubenswrapper[4917]: I1212 00:42:18.754093 4917 scope.go:117] "RemoveContainer" containerID="7a50badbdd6e0dd26c0ca7e923491d00d172eb795018c44a10e3e14dd7e9d5be" Dec 12 00:42:18 crc kubenswrapper[4917]: I1212 00:42:18.802531 4917 scope.go:117] "RemoveContainer" containerID="638c8f47fc7adaba5f1c14bdfa4a9c3cc04e3b4302fac4a43ac2440a55977899" Dec 12 00:42:18 crc kubenswrapper[4917]: I1212 00:42:18.838832 4917 scope.go:117] "RemoveContainer" containerID="9c84d9666a82fd1f067fcff224c7d2f650cb1259b956147075a83c4cf45b3739" Dec 12 00:42:18 crc kubenswrapper[4917]: I1212 00:42:18.891139 4917 scope.go:117] "RemoveContainer" containerID="06a3dd36bd2e19c807dc246055ef56762325aed48add0d4b233bcfd1f3b28099" Dec 12 00:42:28 crc kubenswrapper[4917]: I1212 00:42:28.763074 4917 
generic.go:334] "Generic (PLEG): container finished" podID="99f5ed26-9619-41e0-95e1-dad0e9e78fc9" containerID="eb4ad9c1638aad84ddf5940201ad40c308dfc3fa4651481baa431b2cf393f303" exitCode=0 Dec 12 00:42:28 crc kubenswrapper[4917]: I1212 00:42:28.763143 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" event={"ID":"99f5ed26-9619-41e0-95e1-dad0e9e78fc9","Type":"ContainerDied","Data":"eb4ad9c1638aad84ddf5940201ad40c308dfc3fa4651481baa431b2cf393f303"} Dec 12 00:42:28 crc kubenswrapper[4917]: I1212 00:42:28.764931 4917 scope.go:117] "RemoveContainer" containerID="eb4ad9c1638aad84ddf5940201ad40c308dfc3fa4651481baa431b2cf393f303" Dec 12 00:42:32 crc kubenswrapper[4917]: I1212 00:42:32.967361 4917 generic.go:334] "Generic (PLEG): container finished" podID="99f5ed26-9619-41e0-95e1-dad0e9e78fc9" containerID="a994a4a989727ce3b0100346c0c640fcdc4c1387ebe35e45424bd86298347b26" exitCode=0 Dec 12 00:42:32 crc kubenswrapper[4917]: I1212 00:42:32.967486 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" event={"ID":"99f5ed26-9619-41e0-95e1-dad0e9e78fc9","Type":"ContainerDied","Data":"a994a4a989727ce3b0100346c0c640fcdc4c1387ebe35e45424bd86298347b26"} Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.324387 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.498799 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwdx6\" (UniqueName: \"kubernetes.io/projected/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-kube-api-access-xwdx6\") pod \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.498916 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-healthcheck-log\") pod \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.498942 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-config\") pod \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.498981 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-sensubility-config\") pod \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.499042 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-entrypoint-script\") pod \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.499126 4917 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-publisher\") pod \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.499154 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-entrypoint-script\") pod \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\" (UID: \"99f5ed26-9619-41e0-95e1-dad0e9e78fc9\") " Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.516261 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-kube-api-access-xwdx6" (OuterVolumeSpecName: "kube-api-access-xwdx6") pod "99f5ed26-9619-41e0-95e1-dad0e9e78fc9" (UID: "99f5ed26-9619-41e0-95e1-dad0e9e78fc9"). InnerVolumeSpecName "kube-api-access-xwdx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.519561 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "99f5ed26-9619-41e0-95e1-dad0e9e78fc9" (UID: "99f5ed26-9619-41e0-95e1-dad0e9e78fc9"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.519934 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "99f5ed26-9619-41e0-95e1-dad0e9e78fc9" (UID: "99f5ed26-9619-41e0-95e1-dad0e9e78fc9"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.520129 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "99f5ed26-9619-41e0-95e1-dad0e9e78fc9" (UID: "99f5ed26-9619-41e0-95e1-dad0e9e78fc9"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.522850 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "99f5ed26-9619-41e0-95e1-dad0e9e78fc9" (UID: "99f5ed26-9619-41e0-95e1-dad0e9e78fc9"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.527409 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "99f5ed26-9619-41e0-95e1-dad0e9e78fc9" (UID: "99f5ed26-9619-41e0-95e1-dad0e9e78fc9"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.532802 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "99f5ed26-9619-41e0-95e1-dad0e9e78fc9" (UID: "99f5ed26-9619-41e0-95e1-dad0e9e78fc9"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.600964 4917 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.601385 4917 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.601509 4917 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.601593 4917 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.601683 4917 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.601765 4917 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.601874 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwdx6\" (UniqueName: \"kubernetes.io/projected/99f5ed26-9619-41e0-95e1-dad0e9e78fc9-kube-api-access-xwdx6\") on node 
\"crc\" DevicePath \"\"" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.990530 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" event={"ID":"99f5ed26-9619-41e0-95e1-dad0e9e78fc9","Type":"ContainerDied","Data":"f66a3d1d7bdc714dd767af57830d67f2665b7637c93b320d11673cfb45bcefd2"} Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.990605 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f66a3d1d7bdc714dd767af57830d67f2665b7637c93b320d11673cfb45bcefd2" Dec 12 00:42:34 crc kubenswrapper[4917]: I1212 00:42:34.990635 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-p6jrh" Dec 12 00:42:36 crc kubenswrapper[4917]: I1212 00:42:36.517735 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-p6jrh_99f5ed26-9619-41e0-95e1-dad0e9e78fc9/smoketest-collectd/0.log" Dec 12 00:42:36 crc kubenswrapper[4917]: I1212 00:42:36.816229 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-p6jrh_99f5ed26-9619-41e0-95e1-dad0e9e78fc9/smoketest-ceilometer/0.log" Dec 12 00:42:37 crc kubenswrapper[4917]: I1212 00:42:37.421517 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-7cxmg_1600217d-e49f-4aa7-8be8-6dff2c94407b/default-interconnect/0.log" Dec 12 00:42:37 crc kubenswrapper[4917]: I1212 00:42:37.690559 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74_1a020119-1f66-4f57-be67-c2a2b91afda1/bridge/2.log" Dec 12 00:42:38 crc kubenswrapper[4917]: I1212 00:42:38.010857 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-swb74_1a020119-1f66-4f57-be67-c2a2b91afda1/sg-core/0.log" Dec 12 00:42:38 
crc kubenswrapper[4917]: I1212 00:42:38.331553 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc_1455d428-63a6-4c87-8a1d-958b4b5c1870/bridge/2.log" Dec 12 00:42:38 crc kubenswrapper[4917]: I1212 00:42:38.632855 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-67f7fd5f6b-k2rkc_1455d428-63a6-4c87-8a1d-958b4b5c1870/sg-core/0.log" Dec 12 00:42:38 crc kubenswrapper[4917]: I1212 00:42:38.979883 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v_339269b5-9c82-4a6c-83c8-02f76531493c/bridge/2.log" Dec 12 00:42:39 crc kubenswrapper[4917]: I1212 00:42:39.289476 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-29q8v_339269b5-9c82-4a6c-83c8-02f76531493c/sg-core/0.log" Dec 12 00:42:39 crc kubenswrapper[4917]: I1212 00:42:39.620079 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7_7c2fa68d-9bd5-4d49-8733-79eada3821f0/bridge/2.log" Dec 12 00:42:39 crc kubenswrapper[4917]: I1212 00:42:39.881439 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-8589c7fb8-hjpf7_7c2fa68d-9bd5-4d49-8733-79eada3821f0/sg-core/0.log" Dec 12 00:42:40 crc kubenswrapper[4917]: I1212 00:42:40.153887 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7_fe8e080e-f50f-4d62-b8bf-db02d45c9dd9/bridge/2.log" Dec 12 00:42:40 crc kubenswrapper[4917]: I1212 00:42:40.470763 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-rdpb7_fe8e080e-f50f-4d62-b8bf-db02d45c9dd9/sg-core/0.log" 
Dec 12 00:42:44 crc kubenswrapper[4917]: I1212 00:42:44.131597 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-5bdd75688b-nlpwg_fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6/operator/0.log" Dec 12 00:42:44 crc kubenswrapper[4917]: I1212 00:42:44.402930 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_460a63ae-ba23-445f-afd3-c9dde4d8a411/prometheus/0.log" Dec 12 00:42:44 crc kubenswrapper[4917]: I1212 00:42:44.738388 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_6bda0b57-4afb-46a0-a754-2efd6aaa2a95/elasticsearch/0.log" Dec 12 00:42:44 crc kubenswrapper[4917]: I1212 00:42:44.984672 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-pschj_58a31d61-50bc-4a00-9040-7ece16fa7c9d/prometheus-webhook-snmp/0.log" Dec 12 00:42:45 crc kubenswrapper[4917]: I1212 00:42:45.335604 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_337f1b5b-cd54-4c3e-98ef-2c29e8019998/alertmanager/0.log" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.517513 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-btkwc"] Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518732 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerName="extract-utilities" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.518762 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerName="extract-utilities" Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518775 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerName="extract-content" Dec 12 00:42:54 crc kubenswrapper[4917]: 
I1212 00:42:54.518781 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerName="extract-content" Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518792 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerName="registry-server" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.518798 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerName="registry-server" Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518807 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerName="registry-server" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.518812 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerName="registry-server" Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518822 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerName="extract-utilities" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.518827 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerName="extract-utilities" Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518838 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerName="extract-content" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.518843 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerName="extract-content" Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518853 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f5ed26-9619-41e0-95e1-dad0e9e78fc9" containerName="smoketest-collectd" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 
00:42:54.518859 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f5ed26-9619-41e0-95e1-dad0e9e78fc9" containerName="smoketest-collectd" Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518867 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerName="extract-content" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.518874 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerName="extract-content" Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518883 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerName="extract-utilities" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.518889 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerName="extract-utilities" Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518903 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f5ed26-9619-41e0-95e1-dad0e9e78fc9" containerName="smoketest-ceilometer" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.518910 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f5ed26-9619-41e0-95e1-dad0e9e78fc9" containerName="smoketest-ceilometer" Dec 12 00:42:54 crc kubenswrapper[4917]: E1212 00:42:54.518921 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerName="registry-server" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.518930 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerName="registry-server" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.519106 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f5ed26-9619-41e0-95e1-dad0e9e78fc9" containerName="smoketest-ceilometer" Dec 12 00:42:54 crc kubenswrapper[4917]: 
I1212 00:42:54.519119 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae6d5ae-1bec-4e57-9048-147ffea5cf39" containerName="registry-server" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.519127 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f5ed26-9619-41e0-95e1-dad0e9e78fc9" containerName="smoketest-collectd" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.519137 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d6c2620-5dea-4f33-8df1-3247e350d6cd" containerName="registry-server" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.519153 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e49af0-4a50-42ce-af81-a397919c9df2" containerName="registry-server" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.519908 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.533383 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-btkwc"] Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.662836 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2cpl\" (UniqueName: \"kubernetes.io/projected/5d9f8af1-777a-40f2-a025-f3f5b4888d57-kube-api-access-s2cpl\") pod \"service-telemetry-framework-operators-btkwc\" (UID: \"5d9f8af1-777a-40f2-a025-f3f5b4888d57\") " pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.765058 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2cpl\" (UniqueName: \"kubernetes.io/projected/5d9f8af1-777a-40f2-a025-f3f5b4888d57-kube-api-access-s2cpl\") pod \"service-telemetry-framework-operators-btkwc\" (UID: \"5d9f8af1-777a-40f2-a025-f3f5b4888d57\") 
" pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.801242 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2cpl\" (UniqueName: \"kubernetes.io/projected/5d9f8af1-777a-40f2-a025-f3f5b4888d57-kube-api-access-s2cpl\") pod \"service-telemetry-framework-operators-btkwc\" (UID: \"5d9f8af1-777a-40f2-a025-f3f5b4888d57\") " pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:42:54 crc kubenswrapper[4917]: I1212 00:42:54.849162 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:42:55 crc kubenswrapper[4917]: I1212 00:42:55.134583 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-btkwc"] Dec 12 00:42:55 crc kubenswrapper[4917]: I1212 00:42:55.149859 4917 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 00:42:55 crc kubenswrapper[4917]: I1212 00:42:55.407729 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-btkwc" event={"ID":"5d9f8af1-777a-40f2-a025-f3f5b4888d57","Type":"ContainerStarted","Data":"a4a9450fb40e293476628420d94ce743dd5b425095860fe09565029985b5e444"} Dec 12 00:42:56 crc kubenswrapper[4917]: I1212 00:42:56.419249 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-btkwc" event={"ID":"5d9f8af1-777a-40f2-a025-f3f5b4888d57","Type":"ContainerStarted","Data":"48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468"} Dec 12 00:42:56 crc kubenswrapper[4917]: I1212 00:42:56.439241 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-btkwc" podStartSLOduration=1.922172807 
podStartE2EDuration="2.439196704s" podCreationTimestamp="2025-12-12 00:42:54 +0000 UTC" firstStartedPulling="2025-12-12 00:42:55.149576789 +0000 UTC m=+2209.927377602" lastFinishedPulling="2025-12-12 00:42:55.666600686 +0000 UTC m=+2210.444401499" observedRunningTime="2025-12-12 00:42:56.437504319 +0000 UTC m=+2211.215305132" watchObservedRunningTime="2025-12-12 00:42:56.439196704 +0000 UTC m=+2211.216997527" Dec 12 00:43:01 crc kubenswrapper[4917]: I1212 00:43:01.447933 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-68478559c6-r5bf2_139ada12-b9b2-4a1b-acfa-2bf1b51fb12c/operator/0.log" Dec 12 00:43:04 crc kubenswrapper[4917]: I1212 00:43:04.623531 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-5bdd75688b-nlpwg_fc25fa44-50c8-45f3-91fb-3cb85ab2dbf6/operator/0.log" Dec 12 00:43:04 crc kubenswrapper[4917]: I1212 00:43:04.850374 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:43:04 crc kubenswrapper[4917]: I1212 00:43:04.850797 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:43:04 crc kubenswrapper[4917]: I1212 00:43:04.882602 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:43:04 crc kubenswrapper[4917]: I1212 00:43:04.945341 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_1bc83425-dbad-4b59-8622-1627e4e724f8/qdr/0.log" Dec 12 00:43:05 crc kubenswrapper[4917]: I1212 00:43:05.792012 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:43:07 crc kubenswrapper[4917]: I1212 00:43:07.283110 4917 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-btkwc"] Dec 12 00:43:08 crc kubenswrapper[4917]: I1212 00:43:08.785891 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-operators-btkwc" podUID="5d9f8af1-777a-40f2-a025-f3f5b4888d57" containerName="registry-server" containerID="cri-o://48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468" gracePeriod=2 Dec 12 00:43:08 crc kubenswrapper[4917]: E1212 00:43:08.954945 4917 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d9f8af1_777a_40f2_a025_f3f5b4888d57.slice/crio-48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468.scope\": RecentStats: unable to find data in memory cache]" Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.652844 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.760032 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2cpl\" (UniqueName: \"kubernetes.io/projected/5d9f8af1-777a-40f2-a025-f3f5b4888d57-kube-api-access-s2cpl\") pod \"5d9f8af1-777a-40f2-a025-f3f5b4888d57\" (UID: \"5d9f8af1-777a-40f2-a025-f3f5b4888d57\") " Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.767033 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d9f8af1-777a-40f2-a025-f3f5b4888d57-kube-api-access-s2cpl" (OuterVolumeSpecName: "kube-api-access-s2cpl") pod "5d9f8af1-777a-40f2-a025-f3f5b4888d57" (UID: "5d9f8af1-777a-40f2-a025-f3f5b4888d57"). InnerVolumeSpecName "kube-api-access-s2cpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.794790 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-btkwc" Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.794815 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-btkwc" event={"ID":"5d9f8af1-777a-40f2-a025-f3f5b4888d57","Type":"ContainerDied","Data":"48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468"} Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.794878 4917 scope.go:117] "RemoveContainer" containerID="48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468" Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.794794 4917 generic.go:334] "Generic (PLEG): container finished" podID="5d9f8af1-777a-40f2-a025-f3f5b4888d57" containerID="48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468" exitCode=0 Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.794936 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-btkwc" event={"ID":"5d9f8af1-777a-40f2-a025-f3f5b4888d57","Type":"ContainerDied","Data":"a4a9450fb40e293476628420d94ce743dd5b425095860fe09565029985b5e444"} Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.814892 4917 scope.go:117] "RemoveContainer" containerID="48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468" Dec 12 00:43:09 crc kubenswrapper[4917]: E1212 00:43:09.815508 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468\": container with ID starting with 48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468 not found: ID does not exist" 
containerID="48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468" Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.815558 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468"} err="failed to get container status \"48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468\": rpc error: code = NotFound desc = could not find container \"48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468\": container with ID starting with 48a5d873ff3e5eb808a70ed8b8cbfc8784eae97045e10343722a17f826c2f468 not found: ID does not exist" Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.839345 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-btkwc"] Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.844481 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-btkwc"] Dec 12 00:43:09 crc kubenswrapper[4917]: I1212 00:43:09.861865 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2cpl\" (UniqueName: \"kubernetes.io/projected/5d9f8af1-777a-40f2-a025-f3f5b4888d57-kube-api-access-s2cpl\") on node \"crc\" DevicePath \"\"" Dec 12 00:43:11 crc kubenswrapper[4917]: I1212 00:43:11.618623 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d9f8af1-777a-40f2-a025-f3f5b4888d57" path="/var/lib/kubelet/pods/5d9f8af1-777a-40f2-a025-f3f5b4888d57/volumes" Dec 12 00:43:40 crc kubenswrapper[4917]: I1212 00:43:40.874041 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v4f2q/must-gather-zljpq"] Dec 12 00:43:40 crc kubenswrapper[4917]: E1212 00:43:40.875001 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d9f8af1-777a-40f2-a025-f3f5b4888d57" containerName="registry-server" Dec 12 00:43:40 crc kubenswrapper[4917]: I1212 
00:43:40.875017 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d9f8af1-777a-40f2-a025-f3f5b4888d57" containerName="registry-server" Dec 12 00:43:40 crc kubenswrapper[4917]: I1212 00:43:40.875192 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d9f8af1-777a-40f2-a025-f3f5b4888d57" containerName="registry-server" Dec 12 00:43:40 crc kubenswrapper[4917]: I1212 00:43:40.876049 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4f2q/must-gather-zljpq" Dec 12 00:43:40 crc kubenswrapper[4917]: I1212 00:43:40.882227 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v4f2q"/"default-dockercfg-m9hv2" Dec 12 00:43:40 crc kubenswrapper[4917]: I1212 00:43:40.882674 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v4f2q"/"kube-root-ca.crt" Dec 12 00:43:40 crc kubenswrapper[4917]: I1212 00:43:40.883759 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v4f2q"/"openshift-service-ca.crt" Dec 12 00:43:40 crc kubenswrapper[4917]: I1212 00:43:40.902081 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v4f2q/must-gather-zljpq"] Dec 12 00:43:41 crc kubenswrapper[4917]: I1212 00:43:41.017489 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15e5f551-1e94-4e9f-92b0-83c27066271d-must-gather-output\") pod \"must-gather-zljpq\" (UID: \"15e5f551-1e94-4e9f-92b0-83c27066271d\") " pod="openshift-must-gather-v4f2q/must-gather-zljpq" Dec 12 00:43:41 crc kubenswrapper[4917]: I1212 00:43:41.017559 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7899b\" (UniqueName: \"kubernetes.io/projected/15e5f551-1e94-4e9f-92b0-83c27066271d-kube-api-access-7899b\") pod \"must-gather-zljpq\" 
(UID: \"15e5f551-1e94-4e9f-92b0-83c27066271d\") " pod="openshift-must-gather-v4f2q/must-gather-zljpq" Dec 12 00:43:41 crc kubenswrapper[4917]: I1212 00:43:41.119601 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7899b\" (UniqueName: \"kubernetes.io/projected/15e5f551-1e94-4e9f-92b0-83c27066271d-kube-api-access-7899b\") pod \"must-gather-zljpq\" (UID: \"15e5f551-1e94-4e9f-92b0-83c27066271d\") " pod="openshift-must-gather-v4f2q/must-gather-zljpq" Dec 12 00:43:41 crc kubenswrapper[4917]: I1212 00:43:41.119838 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15e5f551-1e94-4e9f-92b0-83c27066271d-must-gather-output\") pod \"must-gather-zljpq\" (UID: \"15e5f551-1e94-4e9f-92b0-83c27066271d\") " pod="openshift-must-gather-v4f2q/must-gather-zljpq" Dec 12 00:43:41 crc kubenswrapper[4917]: I1212 00:43:41.120338 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15e5f551-1e94-4e9f-92b0-83c27066271d-must-gather-output\") pod \"must-gather-zljpq\" (UID: \"15e5f551-1e94-4e9f-92b0-83c27066271d\") " pod="openshift-must-gather-v4f2q/must-gather-zljpq" Dec 12 00:43:41 crc kubenswrapper[4917]: I1212 00:43:41.141610 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7899b\" (UniqueName: \"kubernetes.io/projected/15e5f551-1e94-4e9f-92b0-83c27066271d-kube-api-access-7899b\") pod \"must-gather-zljpq\" (UID: \"15e5f551-1e94-4e9f-92b0-83c27066271d\") " pod="openshift-must-gather-v4f2q/must-gather-zljpq" Dec 12 00:43:41 crc kubenswrapper[4917]: I1212 00:43:41.203759 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4f2q/must-gather-zljpq" Dec 12 00:43:41 crc kubenswrapper[4917]: I1212 00:43:41.840860 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v4f2q/must-gather-zljpq"] Dec 12 00:43:42 crc kubenswrapper[4917]: I1212 00:43:42.079459 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4f2q/must-gather-zljpq" event={"ID":"15e5f551-1e94-4e9f-92b0-83c27066271d","Type":"ContainerStarted","Data":"c3049141908f787835d250c64868fd98e0d5822accab8a36dc8895557c69b66a"} Dec 12 00:43:55 crc kubenswrapper[4917]: I1212 00:43:55.257660 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4f2q/must-gather-zljpq" event={"ID":"15e5f551-1e94-4e9f-92b0-83c27066271d","Type":"ContainerStarted","Data":"fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880"} Dec 12 00:43:55 crc kubenswrapper[4917]: I1212 00:43:55.258527 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4f2q/must-gather-zljpq" event={"ID":"15e5f551-1e94-4e9f-92b0-83c27066271d","Type":"ContainerStarted","Data":"e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1"} Dec 12 00:43:55 crc kubenswrapper[4917]: I1212 00:43:55.282412 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v4f2q/must-gather-zljpq" podStartSLOduration=2.6838598620000003 podStartE2EDuration="15.282393926s" podCreationTimestamp="2025-12-12 00:43:40 +0000 UTC" firstStartedPulling="2025-12-12 00:43:41.843591726 +0000 UTC m=+2256.621392539" lastFinishedPulling="2025-12-12 00:43:54.44212579 +0000 UTC m=+2269.219926603" observedRunningTime="2025-12-12 00:43:55.280078675 +0000 UTC m=+2270.057879508" watchObservedRunningTime="2025-12-12 00:43:55.282393926 +0000 UTC m=+2270.060194739" Dec 12 00:43:59 crc kubenswrapper[4917]: I1212 00:43:59.639839 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:43:59 crc kubenswrapper[4917]: I1212 00:43:59.640238 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:44:29 crc kubenswrapper[4917]: I1212 00:44:29.639275 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:44:29 crc kubenswrapper[4917]: I1212 00:44:29.639786 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:44:36 crc kubenswrapper[4917]: I1212 00:44:36.924204 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5tn6z_9ba57102-f6a5-41ef-a83c-951795076ab5/control-plane-machine-set-operator/0.log" Dec 12 00:44:37 crc kubenswrapper[4917]: I1212 00:44:37.060335 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-84jtz_b06b8a36-c12a-4604-a017-277d9a6a18ff/kube-rbac-proxy/0.log" Dec 12 00:44:37 crc kubenswrapper[4917]: I1212 00:44:37.142098 4917 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-84jtz_b06b8a36-c12a-4604-a017-277d9a6a18ff/machine-api-operator/0.log" Dec 12 00:44:48 crc kubenswrapper[4917]: I1212 00:44:48.987722 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-kxvnb_659b4365-0d0c-412c-b1a2-1712298ec896/cert-manager-controller/0.log" Dec 12 00:44:49 crc kubenswrapper[4917]: I1212 00:44:49.163710 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-2tfk2_5772c7d5-7528-4639-9885-ac71feeed2fa/cert-manager-cainjector/0.log" Dec 12 00:44:49 crc kubenswrapper[4917]: I1212 00:44:49.236795 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-nvcnr_6c7107a1-63ed-46dd-b721-8eccfd351246/cert-manager-webhook/0.log" Dec 12 00:44:59 crc kubenswrapper[4917]: I1212 00:44:59.639378 4917 patch_prober.go:28] interesting pod/machine-config-daemon-ktvtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 00:44:59 crc kubenswrapper[4917]: I1212 00:44:59.640102 4917 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 00:44:59 crc kubenswrapper[4917]: I1212 00:44:59.640167 4917 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" Dec 12 00:44:59 crc kubenswrapper[4917]: I1212 00:44:59.641148 4917 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff"} pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 00:44:59 crc kubenswrapper[4917]: I1212 00:44:59.641254 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerName="machine-config-daemon" containerID="cri-o://c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" gracePeriod=600 Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.153133 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k"] Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.154512 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.156988 4917 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.157105 4917 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.163714 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k"] Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.263168 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4da950da-6370-4ac4-b4bc-5bff44b376f2-secret-volume\") pod \"collect-profiles-29425005-ll94k\" (UID: 
\"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.263222 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ljjj\" (UniqueName: \"kubernetes.io/projected/4da950da-6370-4ac4-b4bc-5bff44b376f2-kube-api-access-7ljjj\") pod \"collect-profiles-29425005-ll94k\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.263263 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4da950da-6370-4ac4-b4bc-5bff44b376f2-config-volume\") pod \"collect-profiles-29425005-ll94k\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.364475 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4da950da-6370-4ac4-b4bc-5bff44b376f2-secret-volume\") pod \"collect-profiles-29425005-ll94k\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.364527 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ljjj\" (UniqueName: \"kubernetes.io/projected/4da950da-6370-4ac4-b4bc-5bff44b376f2-kube-api-access-7ljjj\") pod \"collect-profiles-29425005-ll94k\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.364571 4917 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4da950da-6370-4ac4-b4bc-5bff44b376f2-config-volume\") pod \"collect-profiles-29425005-ll94k\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.365514 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4da950da-6370-4ac4-b4bc-5bff44b376f2-config-volume\") pod \"collect-profiles-29425005-ll94k\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.370419 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4da950da-6370-4ac4-b4bc-5bff44b376f2-secret-volume\") pod \"collect-profiles-29425005-ll94k\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.381214 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ljjj\" (UniqueName: \"kubernetes.io/projected/4da950da-6370-4ac4-b4bc-5bff44b376f2-kube-api-access-7ljjj\") pod \"collect-profiles-29425005-ll94k\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.472905 4917 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:00 crc kubenswrapper[4917]: E1212 00:45:00.580033 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.910971 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k"] Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.978823 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" event={"ID":"4da950da-6370-4ac4-b4bc-5bff44b376f2","Type":"ContainerStarted","Data":"f3c1a3dde727cea472526ec25d0382171cd85dcbb62d508c29d3985ed77e8648"} Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.980980 4917 generic.go:334] "Generic (PLEG): container finished" podID="8bddbc3a-d8cc-4766-80d3-92562e840be5" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" exitCode=0 Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.981041 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerDied","Data":"c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff"} Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.981088 4917 scope.go:117] "RemoveContainer" containerID="9ae58f5b179e6c967cab3b964e831e89319949df0b3e0b977568f62d47cf198d" Dec 12 00:45:00 crc kubenswrapper[4917]: I1212 00:45:00.981688 4917 scope.go:117] 
"RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:45:00 crc kubenswrapper[4917]: E1212 00:45:00.982043 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:45:01 crc kubenswrapper[4917]: I1212 00:45:01.994420 4917 generic.go:334] "Generic (PLEG): container finished" podID="4da950da-6370-4ac4-b4bc-5bff44b376f2" containerID="14e7639ee70e3c63c8f424886ff6686e6ca2281023e3b249c3c3e6ff91d0195e" exitCode=0 Dec 12 00:45:01 crc kubenswrapper[4917]: I1212 00:45:01.994769 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" event={"ID":"4da950da-6370-4ac4-b4bc-5bff44b376f2","Type":"ContainerDied","Data":"14e7639ee70e3c63c8f424886ff6686e6ca2281023e3b249c3c3e6ff91d0195e"} Dec 12 00:45:03 crc kubenswrapper[4917]: I1212 00:45:03.280452 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:03 crc kubenswrapper[4917]: I1212 00:45:03.414351 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4da950da-6370-4ac4-b4bc-5bff44b376f2-secret-volume\") pod \"4da950da-6370-4ac4-b4bc-5bff44b376f2\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " Dec 12 00:45:03 crc kubenswrapper[4917]: I1212 00:45:03.414679 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4da950da-6370-4ac4-b4bc-5bff44b376f2-config-volume\") pod \"4da950da-6370-4ac4-b4bc-5bff44b376f2\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " Dec 12 00:45:03 crc kubenswrapper[4917]: I1212 00:45:03.414726 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ljjj\" (UniqueName: \"kubernetes.io/projected/4da950da-6370-4ac4-b4bc-5bff44b376f2-kube-api-access-7ljjj\") pod \"4da950da-6370-4ac4-b4bc-5bff44b376f2\" (UID: \"4da950da-6370-4ac4-b4bc-5bff44b376f2\") " Dec 12 00:45:03 crc kubenswrapper[4917]: I1212 00:45:03.415418 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da950da-6370-4ac4-b4bc-5bff44b376f2-config-volume" (OuterVolumeSpecName: "config-volume") pod "4da950da-6370-4ac4-b4bc-5bff44b376f2" (UID: "4da950da-6370-4ac4-b4bc-5bff44b376f2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 00:45:03 crc kubenswrapper[4917]: I1212 00:45:03.420039 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da950da-6370-4ac4-b4bc-5bff44b376f2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4da950da-6370-4ac4-b4bc-5bff44b376f2" (UID: "4da950da-6370-4ac4-b4bc-5bff44b376f2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 00:45:03 crc kubenswrapper[4917]: I1212 00:45:03.420522 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da950da-6370-4ac4-b4bc-5bff44b376f2-kube-api-access-7ljjj" (OuterVolumeSpecName: "kube-api-access-7ljjj") pod "4da950da-6370-4ac4-b4bc-5bff44b376f2" (UID: "4da950da-6370-4ac4-b4bc-5bff44b376f2"). InnerVolumeSpecName "kube-api-access-7ljjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:45:03 crc kubenswrapper[4917]: I1212 00:45:03.516879 4917 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4da950da-6370-4ac4-b4bc-5bff44b376f2-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 00:45:03 crc kubenswrapper[4917]: I1212 00:45:03.516923 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ljjj\" (UniqueName: \"kubernetes.io/projected/4da950da-6370-4ac4-b4bc-5bff44b376f2-kube-api-access-7ljjj\") on node \"crc\" DevicePath \"\"" Dec 12 00:45:03 crc kubenswrapper[4917]: I1212 00:45:03.516952 4917 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4da950da-6370-4ac4-b4bc-5bff44b376f2-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 00:45:04 crc kubenswrapper[4917]: I1212 00:45:04.011388 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" event={"ID":"4da950da-6370-4ac4-b4bc-5bff44b376f2","Type":"ContainerDied","Data":"f3c1a3dde727cea472526ec25d0382171cd85dcbb62d508c29d3985ed77e8648"} Dec 12 00:45:04 crc kubenswrapper[4917]: I1212 00:45:04.011434 4917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3c1a3dde727cea472526ec25d0382171cd85dcbb62d508c29d3985ed77e8648" Dec 12 00:45:04 crc kubenswrapper[4917]: I1212 00:45:04.011513 4917 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425005-ll94k" Dec 12 00:45:04 crc kubenswrapper[4917]: I1212 00:45:04.351761 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr"] Dec 12 00:45:04 crc kubenswrapper[4917]: I1212 00:45:04.359054 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424960-nn5xr"] Dec 12 00:45:04 crc kubenswrapper[4917]: I1212 00:45:04.922699 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w_277c47e2-03cd-4ac4-9125-3379f89dc58c/util/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.167325 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w_277c47e2-03cd-4ac4-9125-3379f89dc58c/util/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.202478 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w_277c47e2-03cd-4ac4-9125-3379f89dc58c/pull/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.205635 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w_277c47e2-03cd-4ac4-9125-3379f89dc58c/pull/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.375281 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w_277c47e2-03cd-4ac4-9125-3379f89dc58c/util/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.412303 4917 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w_277c47e2-03cd-4ac4-9125-3379f89dc58c/pull/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.469874 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpg5w_277c47e2-03cd-4ac4-9125-3379f89dc58c/extract/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.589402 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj_11f5a0c1-fe52-4822-a95b-64e89e66d3c4/util/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.627849 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4ee0d0-175d-436c-9161-2822246aacec" path="/var/lib/kubelet/pods/5c4ee0d0-175d-436c-9161-2822246aacec/volumes" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.700174 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj_11f5a0c1-fe52-4822-a95b-64e89e66d3c4/util/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.778214 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj_11f5a0c1-fe52-4822-a95b-64e89e66d3c4/pull/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.783959 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj_11f5a0c1-fe52-4822-a95b-64e89e66d3c4/pull/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.970008 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj_11f5a0c1-fe52-4822-a95b-64e89e66d3c4/util/0.log" Dec 12 00:45:05 crc 
kubenswrapper[4917]: I1212 00:45:05.972285 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj_11f5a0c1-fe52-4822-a95b-64e89e66d3c4/pull/0.log" Dec 12 00:45:05 crc kubenswrapper[4917]: I1212 00:45:05.973151 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109bttj_11f5a0c1-fe52-4822-a95b-64e89e66d3c4/extract/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.135226 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn_b3d557ce-f222-460b-96a9-9b7e330f9b82/util/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.315748 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn_b3d557ce-f222-460b-96a9-9b7e330f9b82/util/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.317807 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn_b3d557ce-f222-460b-96a9-9b7e330f9b82/pull/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.329312 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn_b3d557ce-f222-460b-96a9-9b7e330f9b82/pull/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.512152 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn_b3d557ce-f222-460b-96a9-9b7e330f9b82/util/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.512922 4917 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn_b3d557ce-f222-460b-96a9-9b7e330f9b82/pull/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.529186 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f2nntn_b3d557ce-f222-460b-96a9-9b7e330f9b82/extract/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.686500 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82_eb199435-b885-4a1b-bff7-cd9f113dfe70/util/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.859204 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82_eb199435-b885-4a1b-bff7-cd9f113dfe70/pull/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.869397 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82_eb199435-b885-4a1b-bff7-cd9f113dfe70/pull/0.log" Dec 12 00:45:06 crc kubenswrapper[4917]: I1212 00:45:06.883237 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82_eb199435-b885-4a1b-bff7-cd9f113dfe70/util/0.log" Dec 12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.051087 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82_eb199435-b885-4a1b-bff7-cd9f113dfe70/util/0.log" Dec 12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.052949 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82_eb199435-b885-4a1b-bff7-cd9f113dfe70/extract/0.log" Dec 
12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.070786 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5efcl82_eb199435-b885-4a1b-bff7-cd9f113dfe70/pull/0.log" Dec 12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.222495 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g8xrd_d8f03c62-f504-466a-87d3-15b358c3cd0e/extract-utilities/0.log" Dec 12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.398891 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g8xrd_d8f03c62-f504-466a-87d3-15b358c3cd0e/extract-utilities/0.log" Dec 12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.418560 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g8xrd_d8f03c62-f504-466a-87d3-15b358c3cd0e/extract-content/0.log" Dec 12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.424086 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g8xrd_d8f03c62-f504-466a-87d3-15b358c3cd0e/extract-content/0.log" Dec 12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.590851 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g8xrd_d8f03c62-f504-466a-87d3-15b358c3cd0e/extract-utilities/0.log" Dec 12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.617853 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g8xrd_d8f03c62-f504-466a-87d3-15b358c3cd0e/extract-content/0.log" Dec 12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.724385 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g8xrd_d8f03c62-f504-466a-87d3-15b358c3cd0e/registry-server/0.log" Dec 12 00:45:07 crc kubenswrapper[4917]: I1212 00:45:07.834596 4917 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p79gn_67ac8eda-571d-4185-b4ee-e6eff7fab229/extract-utilities/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.000444 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p79gn_67ac8eda-571d-4185-b4ee-e6eff7fab229/extract-content/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.002546 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p79gn_67ac8eda-571d-4185-b4ee-e6eff7fab229/extract-utilities/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.029412 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p79gn_67ac8eda-571d-4185-b4ee-e6eff7fab229/extract-content/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.162752 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p79gn_67ac8eda-571d-4185-b4ee-e6eff7fab229/extract-utilities/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.204791 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p79gn_67ac8eda-571d-4185-b4ee-e6eff7fab229/extract-content/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.399081 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-882r8_db163491-1bad-4b12-b00c-9f6b83a1b52f/marketplace-operator/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.488820 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccdts_4ba83efd-fe91-4187-ba9a-ee464371ba30/extract-utilities/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.718171 4917 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-p79gn_67ac8eda-571d-4185-b4ee-e6eff7fab229/registry-server/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.722115 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccdts_4ba83efd-fe91-4187-ba9a-ee464371ba30/extract-content/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.752749 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccdts_4ba83efd-fe91-4187-ba9a-ee464371ba30/extract-utilities/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.778265 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccdts_4ba83efd-fe91-4187-ba9a-ee464371ba30/extract-content/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.877313 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccdts_4ba83efd-fe91-4187-ba9a-ee464371ba30/extract-utilities/0.log" Dec 12 00:45:08 crc kubenswrapper[4917]: I1212 00:45:08.934017 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccdts_4ba83efd-fe91-4187-ba9a-ee464371ba30/extract-content/0.log" Dec 12 00:45:09 crc kubenswrapper[4917]: I1212 00:45:09.286178 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccdts_4ba83efd-fe91-4187-ba9a-ee464371ba30/registry-server/0.log" Dec 12 00:45:11 crc kubenswrapper[4917]: I1212 00:45:11.602575 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:45:11 crc kubenswrapper[4917]: E1212 00:45:11.603145 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:45:19 crc kubenswrapper[4917]: I1212 00:45:19.048970 4917 scope.go:117] "RemoveContainer" containerID="6fee1af40784151c459d53bc5c9c41f32cbb0868b36b60de3fb433bc2272b9d3" Dec 12 00:45:22 crc kubenswrapper[4917]: I1212 00:45:22.068446 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-jz8h4_ef2dff2d-d381-4578-8da4-a7e49e767228/prometheus-operator/0.log" Dec 12 00:45:22 crc kubenswrapper[4917]: I1212 00:45:22.222419 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-76c76f4448-g22vz_4e46b2b7-70db-4c02-bde2-45fd66f3f151/prometheus-operator-admission-webhook/0.log" Dec 12 00:45:22 crc kubenswrapper[4917]: I1212 00:45:22.271044 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-76c76f4448-rwh9h_3192b300-476f-4127-a662-9636f89655c7/prometheus-operator-admission-webhook/0.log" Dec 12 00:45:22 crc kubenswrapper[4917]: I1212 00:45:22.405998 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-zkjtd_9084360e-9bf8-4c2e-a531-ae62da73050d/operator/0.log" Dec 12 00:45:22 crc kubenswrapper[4917]: I1212 00:45:22.477407 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-q9zcl_e679c2c5-ca09-4af3-9fc6-2b7ae572ec0d/perses-operator/0.log" Dec 12 00:45:24 crc kubenswrapper[4917]: I1212 00:45:24.601574 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:45:24 crc kubenswrapper[4917]: E1212 00:45:24.601941 4917 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:45:38 crc kubenswrapper[4917]: I1212 00:45:38.602516 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:45:38 crc kubenswrapper[4917]: E1212 00:45:38.603585 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:45:53 crc kubenswrapper[4917]: I1212 00:45:53.601750 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:45:53 crc kubenswrapper[4917]: E1212 00:45:53.602994 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:46:05 crc kubenswrapper[4917]: I1212 00:46:05.602137 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:46:05 crc kubenswrapper[4917]: E1212 00:46:05.603026 4917 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:46:13 crc kubenswrapper[4917]: I1212 00:46:13.718422 4917 generic.go:334] "Generic (PLEG): container finished" podID="15e5f551-1e94-4e9f-92b0-83c27066271d" containerID="e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1" exitCode=0 Dec 12 00:46:13 crc kubenswrapper[4917]: I1212 00:46:13.718489 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4f2q/must-gather-zljpq" event={"ID":"15e5f551-1e94-4e9f-92b0-83c27066271d","Type":"ContainerDied","Data":"e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1"} Dec 12 00:46:13 crc kubenswrapper[4917]: I1212 00:46:13.719700 4917 scope.go:117] "RemoveContainer" containerID="e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1" Dec 12 00:46:14 crc kubenswrapper[4917]: I1212 00:46:14.209771 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4f2q_must-gather-zljpq_15e5f551-1e94-4e9f-92b0-83c27066271d/gather/0.log" Dec 12 00:46:18 crc kubenswrapper[4917]: I1212 00:46:18.601929 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:46:18 crc kubenswrapper[4917]: E1212 00:46:18.603272 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.207408 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v4f2q/must-gather-zljpq"] Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.208158 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v4f2q/must-gather-zljpq" podUID="15e5f551-1e94-4e9f-92b0-83c27066271d" containerName="copy" containerID="cri-o://fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880" gracePeriod=2 Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.217057 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v4f2q/must-gather-zljpq"] Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.587689 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4f2q_must-gather-zljpq_15e5f551-1e94-4e9f-92b0-83c27066271d/copy/0.log" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.588571 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4f2q/must-gather-zljpq" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.771407 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7899b\" (UniqueName: \"kubernetes.io/projected/15e5f551-1e94-4e9f-92b0-83c27066271d-kube-api-access-7899b\") pod \"15e5f551-1e94-4e9f-92b0-83c27066271d\" (UID: \"15e5f551-1e94-4e9f-92b0-83c27066271d\") " Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.771452 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15e5f551-1e94-4e9f-92b0-83c27066271d-must-gather-output\") pod \"15e5f551-1e94-4e9f-92b0-83c27066271d\" (UID: \"15e5f551-1e94-4e9f-92b0-83c27066271d\") " Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.779309 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e5f551-1e94-4e9f-92b0-83c27066271d-kube-api-access-7899b" (OuterVolumeSpecName: "kube-api-access-7899b") pod "15e5f551-1e94-4e9f-92b0-83c27066271d" (UID: "15e5f551-1e94-4e9f-92b0-83c27066271d"). InnerVolumeSpecName "kube-api-access-7899b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.792739 4917 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4f2q_must-gather-zljpq_15e5f551-1e94-4e9f-92b0-83c27066271d/copy/0.log" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.793147 4917 generic.go:334] "Generic (PLEG): container finished" podID="15e5f551-1e94-4e9f-92b0-83c27066271d" containerID="fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880" exitCode=143 Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.793199 4917 scope.go:117] "RemoveContainer" containerID="fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.793274 4917 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4f2q/must-gather-zljpq" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.827251 4917 scope.go:117] "RemoveContainer" containerID="e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.839776 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15e5f551-1e94-4e9f-92b0-83c27066271d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "15e5f551-1e94-4e9f-92b0-83c27066271d" (UID: "15e5f551-1e94-4e9f-92b0-83c27066271d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.873485 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7899b\" (UniqueName: \"kubernetes.io/projected/15e5f551-1e94-4e9f-92b0-83c27066271d-kube-api-access-7899b\") on node \"crc\" DevicePath \"\"" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.873534 4917 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15e5f551-1e94-4e9f-92b0-83c27066271d-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.875012 4917 scope.go:117] "RemoveContainer" containerID="fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880" Dec 12 00:46:21 crc kubenswrapper[4917]: E1212 00:46:21.875701 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880\": container with ID starting with fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880 not found: ID does not exist" containerID="fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.875746 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880"} err="failed to get container status \"fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880\": rpc error: code = NotFound desc = could not find container \"fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880\": container with ID starting with fe3e4c06ff5709a0c7879f1eeeb062ad4dfc02a68f19839d19f8c2750363d880 not found: ID does not exist" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.875773 4917 scope.go:117] "RemoveContainer" 
containerID="e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1" Dec 12 00:46:21 crc kubenswrapper[4917]: E1212 00:46:21.876102 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1\": container with ID starting with e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1 not found: ID does not exist" containerID="e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1" Dec 12 00:46:21 crc kubenswrapper[4917]: I1212 00:46:21.876123 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1"} err="failed to get container status \"e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1\": rpc error: code = NotFound desc = could not find container \"e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1\": container with ID starting with e950e1064047349d2cc20b4ef8413d36544b45f06769b5c04693e267fcb7c8b1 not found: ID does not exist" Dec 12 00:46:23 crc kubenswrapper[4917]: I1212 00:46:23.611136 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e5f551-1e94-4e9f-92b0-83c27066271d" path="/var/lib/kubelet/pods/15e5f551-1e94-4e9f-92b0-83c27066271d/volumes" Dec 12 00:46:30 crc kubenswrapper[4917]: I1212 00:46:30.602327 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:46:30 crc kubenswrapper[4917]: E1212 00:46:30.603033 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:46:42 crc kubenswrapper[4917]: I1212 00:46:42.602073 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:46:42 crc kubenswrapper[4917]: E1212 00:46:42.604612 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:46:53 crc kubenswrapper[4917]: I1212 00:46:53.602257 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:46:53 crc kubenswrapper[4917]: E1212 00:46:53.603140 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:47:08 crc kubenswrapper[4917]: I1212 00:47:08.602269 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:47:08 crc kubenswrapper[4917]: E1212 00:47:08.603708 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:47:21 crc kubenswrapper[4917]: I1212 00:47:21.601796 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:47:21 crc kubenswrapper[4917]: E1212 00:47:21.602566 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:47:34 crc kubenswrapper[4917]: I1212 00:47:34.602322 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:47:34 crc kubenswrapper[4917]: E1212 00:47:34.603519 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:47:45 crc kubenswrapper[4917]: I1212 00:47:45.602833 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:47:45 crc kubenswrapper[4917]: E1212 00:47:45.603636 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:47:58 crc kubenswrapper[4917]: I1212 00:47:58.602268 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:47:58 crc kubenswrapper[4917]: E1212 00:47:58.603588 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:48:10 crc kubenswrapper[4917]: I1212 00:48:10.602444 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:48:10 crc kubenswrapper[4917]: E1212 00:48:10.604076 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:48:22 crc kubenswrapper[4917]: I1212 00:48:22.602927 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:48:22 crc kubenswrapper[4917]: E1212 00:48:22.604849 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:48:35 crc kubenswrapper[4917]: I1212 00:48:35.983856 4917 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-7v779"] Dec 12 00:48:35 crc kubenswrapper[4917]: E1212 00:48:35.984868 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e5f551-1e94-4e9f-92b0-83c27066271d" containerName="gather" Dec 12 00:48:35 crc kubenswrapper[4917]: I1212 00:48:35.984892 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e5f551-1e94-4e9f-92b0-83c27066271d" containerName="gather" Dec 12 00:48:35 crc kubenswrapper[4917]: E1212 00:48:35.984918 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e5f551-1e94-4e9f-92b0-83c27066271d" containerName="copy" Dec 12 00:48:35 crc kubenswrapper[4917]: I1212 00:48:35.984924 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e5f551-1e94-4e9f-92b0-83c27066271d" containerName="copy" Dec 12 00:48:35 crc kubenswrapper[4917]: E1212 00:48:35.984936 4917 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da950da-6370-4ac4-b4bc-5bff44b376f2" containerName="collect-profiles" Dec 12 00:48:35 crc kubenswrapper[4917]: I1212 00:48:35.984943 4917 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da950da-6370-4ac4-b4bc-5bff44b376f2" containerName="collect-profiles" Dec 12 00:48:35 crc kubenswrapper[4917]: I1212 00:48:35.985126 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e5f551-1e94-4e9f-92b0-83c27066271d" containerName="copy" Dec 12 00:48:35 crc kubenswrapper[4917]: I1212 00:48:35.985145 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e5f551-1e94-4e9f-92b0-83c27066271d" containerName="gather" Dec 
12 00:48:35 crc kubenswrapper[4917]: I1212 00:48:35.985156 4917 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da950da-6370-4ac4-b4bc-5bff44b376f2" containerName="collect-profiles" Dec 12 00:48:35 crc kubenswrapper[4917]: I1212 00:48:35.985918 4917 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:35 crc kubenswrapper[4917]: I1212 00:48:35.997170 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-7v779"] Dec 12 00:48:36 crc kubenswrapper[4917]: I1212 00:48:36.086134 4917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9clz\" (UniqueName: \"kubernetes.io/projected/d315561a-af35-4887-855c-493fb283d5a8-kube-api-access-z9clz\") pod \"service-telemetry-framework-operators-7v779\" (UID: \"d315561a-af35-4887-855c-493fb283d5a8\") " pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:36 crc kubenswrapper[4917]: I1212 00:48:36.187570 4917 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9clz\" (UniqueName: \"kubernetes.io/projected/d315561a-af35-4887-855c-493fb283d5a8-kube-api-access-z9clz\") pod \"service-telemetry-framework-operators-7v779\" (UID: \"d315561a-af35-4887-855c-493fb283d5a8\") " pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:36 crc kubenswrapper[4917]: I1212 00:48:36.212161 4917 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9clz\" (UniqueName: \"kubernetes.io/projected/d315561a-af35-4887-855c-493fb283d5a8-kube-api-access-z9clz\") pod \"service-telemetry-framework-operators-7v779\" (UID: \"d315561a-af35-4887-855c-493fb283d5a8\") " pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:36 crc kubenswrapper[4917]: I1212 00:48:36.310165 4917 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:36 crc kubenswrapper[4917]: I1212 00:48:36.536625 4917 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-7v779"] Dec 12 00:48:36 crc kubenswrapper[4917]: I1212 00:48:36.546900 4917 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 00:48:36 crc kubenswrapper[4917]: I1212 00:48:36.668120 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-7v779" event={"ID":"d315561a-af35-4887-855c-493fb283d5a8","Type":"ContainerStarted","Data":"f7f0e0a956f41c42e4e66fc90ddcf72e332b4930f606dda1490017b5d61dc1e8"} Dec 12 00:48:37 crc kubenswrapper[4917]: I1212 00:48:37.602142 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:48:37 crc kubenswrapper[4917]: E1212 00:48:37.602466 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:48:37 crc kubenswrapper[4917]: I1212 00:48:37.678835 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-7v779" event={"ID":"d315561a-af35-4887-855c-493fb283d5a8","Type":"ContainerStarted","Data":"57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef"} Dec 12 00:48:46 crc kubenswrapper[4917]: I1212 00:48:46.311397 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:46 crc kubenswrapper[4917]: I1212 00:48:46.311879 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:46 crc kubenswrapper[4917]: I1212 00:48:46.347587 4917 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:46 crc kubenswrapper[4917]: I1212 00:48:46.364420 4917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-7v779" podStartSLOduration=11.200423094 podStartE2EDuration="11.36439672s" podCreationTimestamp="2025-12-12 00:48:35 +0000 UTC" firstStartedPulling="2025-12-12 00:48:36.546581058 +0000 UTC m=+2551.324381871" lastFinishedPulling="2025-12-12 00:48:36.710554684 +0000 UTC m=+2551.488355497" observedRunningTime="2025-12-12 00:48:37.700809063 +0000 UTC m=+2552.478609926" watchObservedRunningTime="2025-12-12 00:48:46.36439672 +0000 UTC m=+2561.142197533" Dec 12 00:48:46 crc kubenswrapper[4917]: I1212 00:48:46.785063 4917 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:46 crc kubenswrapper[4917]: I1212 00:48:46.837780 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-7v779"] Dec 12 00:48:48 crc kubenswrapper[4917]: I1212 00:48:48.764313 4917 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-operators-7v779" podUID="d315561a-af35-4887-855c-493fb283d5a8" containerName="registry-server" containerID="cri-o://57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef" gracePeriod=2 Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.676867 4917 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.771801 4917 generic.go:334] "Generic (PLEG): container finished" podID="d315561a-af35-4887-855c-493fb283d5a8" containerID="57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef" exitCode=0 Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.771856 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-7v779" event={"ID":"d315561a-af35-4887-855c-493fb283d5a8","Type":"ContainerDied","Data":"57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef"} Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.771896 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-7v779" event={"ID":"d315561a-af35-4887-855c-493fb283d5a8","Type":"ContainerDied","Data":"f7f0e0a956f41c42e4e66fc90ddcf72e332b4930f606dda1490017b5d61dc1e8"} Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.771917 4917 scope.go:117] "RemoveContainer" containerID="57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef" Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.771963 4917 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-7v779" Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.792032 4917 scope.go:117] "RemoveContainer" containerID="57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef" Dec 12 00:48:49 crc kubenswrapper[4917]: E1212 00:48:49.792575 4917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef\": container with ID starting with 57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef not found: ID does not exist" containerID="57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef" Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.792627 4917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef"} err="failed to get container status \"57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef\": rpc error: code = NotFound desc = could not find container \"57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef\": container with ID starting with 57a76c15a83d8f53e982fed3487e322b663e7e41819a7e34bfa4fa17c2e8aaef not found: ID does not exist" Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.818550 4917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9clz\" (UniqueName: \"kubernetes.io/projected/d315561a-af35-4887-855c-493fb283d5a8-kube-api-access-z9clz\") pod \"d315561a-af35-4887-855c-493fb283d5a8\" (UID: \"d315561a-af35-4887-855c-493fb283d5a8\") " Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.824118 4917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d315561a-af35-4887-855c-493fb283d5a8-kube-api-access-z9clz" (OuterVolumeSpecName: "kube-api-access-z9clz") pod 
"d315561a-af35-4887-855c-493fb283d5a8" (UID: "d315561a-af35-4887-855c-493fb283d5a8"). InnerVolumeSpecName "kube-api-access-z9clz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 00:48:49 crc kubenswrapper[4917]: I1212 00:48:49.920444 4917 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9clz\" (UniqueName: \"kubernetes.io/projected/d315561a-af35-4887-855c-493fb283d5a8-kube-api-access-z9clz\") on node \"crc\" DevicePath \"\"" Dec 12 00:48:50 crc kubenswrapper[4917]: I1212 00:48:50.110612 4917 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-7v779"] Dec 12 00:48:50 crc kubenswrapper[4917]: I1212 00:48:50.117500 4917 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-7v779"] Dec 12 00:48:51 crc kubenswrapper[4917]: I1212 00:48:51.624539 4917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d315561a-af35-4887-855c-493fb283d5a8" path="/var/lib/kubelet/pods/d315561a-af35-4887-855c-493fb283d5a8/volumes" Dec 12 00:48:52 crc kubenswrapper[4917]: I1212 00:48:52.602182 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:48:52 crc kubenswrapper[4917]: E1212 00:48:52.602786 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:49:05 crc kubenswrapper[4917]: I1212 00:49:05.609408 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:49:05 crc kubenswrapper[4917]: E1212 
00:49:05.612170 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:49:17 crc kubenswrapper[4917]: I1212 00:49:17.602136 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:49:17 crc kubenswrapper[4917]: E1212 00:49:17.603079 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:49:32 crc kubenswrapper[4917]: I1212 00:49:32.602202 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:49:32 crc kubenswrapper[4917]: E1212 00:49:32.603407 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:49:46 crc kubenswrapper[4917]: I1212 00:49:46.602213 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:49:46 crc 
kubenswrapper[4917]: E1212 00:49:46.603530 4917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ktvtt_openshift-machine-config-operator(8bddbc3a-d8cc-4766-80d3-92562e840be5)\"" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" podUID="8bddbc3a-d8cc-4766-80d3-92562e840be5" Dec 12 00:50:00 crc kubenswrapper[4917]: I1212 00:50:00.602868 4917 scope.go:117] "RemoveContainer" containerID="c10fdb23635fdb0a5a4c7342a7911a0fce5f493cfe82005f66893d80bfab8bff" Dec 12 00:50:01 crc kubenswrapper[4917]: I1212 00:50:01.407245 4917 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ktvtt" event={"ID":"8bddbc3a-d8cc-4766-80d3-92562e840be5","Type":"ContainerStarted","Data":"2bcb9a192dabfad7f3cce41bd77ccc7779e6bbbb80995c26178e7d5152289857"}